00:00:00.001 Started by upstream project "autotest-per-patch" build number 130935
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.036 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.037 The recommended git tool is: git
00:00:00.037 using credential 00000000-0000-0000-0000-000000000002
00:00:00.038 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.069 Fetching changes from the remote Git repository
00:00:00.071 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.129 Using shallow fetch with depth 1
00:00:00.129 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.129 > git --version # timeout=10
00:00:00.199 > git --version # 'git version 2.39.2'
00:00:00.199 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.252 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.252 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.326 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.337 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.351 Checking out Revision bc56972291bf21b4d2a602b495a165146a8d67a1 (FETCH_HEAD)
00:00:07.351 > git config core.sparsecheckout # timeout=10
00:00:07.363 > git read-tree -mu HEAD # timeout=10
00:00:07.379 > git checkout -f bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=5
00:00:07.397 Commit message: "jenkins/jjb-config: Remove extendedChoice from ipxe-test-images"
00:00:07.397 > git rev-list --no-walk bc56972291bf21b4d2a602b495a165146a8d67a1 # timeout=10
00:00:07.505 [Pipeline] Start of Pipeline
00:00:07.518 [Pipeline] library
00:00:07.520 Loading library shm_lib@master
00:00:07.520 Library shm_lib@master is cached. Copying from home.
00:00:07.536 [Pipeline] node
00:00:07.546 Running on VM-host-WFP7 in /var/jenkins/workspace/raid-vg-autotest_2
00:00:07.548 [Pipeline] {
00:00:07.560 [Pipeline] catchError
00:00:07.562 [Pipeline] {
00:00:07.577 [Pipeline] wrap
00:00:07.588 [Pipeline] {
00:00:07.597 [Pipeline] stage
00:00:07.599 [Pipeline] { (Prologue)
00:00:07.617 [Pipeline] echo
00:00:07.618 Node: VM-host-WFP7
00:00:07.622 [Pipeline] cleanWs
00:00:07.630 [WS-CLEANUP] Deleting project workspace...
00:00:07.630 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.637 [WS-CLEANUP] done
00:00:07.809 [Pipeline] setCustomBuildProperty
00:00:07.887 [Pipeline] httpRequest
00:00:08.808 [Pipeline] echo
00:00:08.810 Sorcerer 10.211.164.101 is alive
00:00:08.822 [Pipeline] retry
00:00:08.824 [Pipeline] {
00:00:08.838 [Pipeline] httpRequest
00:00:08.843 HttpMethod: GET
00:00:08.843 URL: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:08.844 Sending request to url: http://10.211.164.101/packages/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:08.858 Response Code: HTTP/1.1 200 OK
00:00:08.858 Success: Status code 200 is in the accepted range: 200,404
00:00:08.859 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:10.245 [Pipeline] }
00:00:10.263 [Pipeline] // retry
00:00:10.271 [Pipeline] sh
00:00:10.555 + tar --no-same-owner -xf jbp_bc56972291bf21b4d2a602b495a165146a8d67a1.tar.gz
00:00:10.572 [Pipeline] httpRequest
00:00:11.745 [Pipeline] echo
00:00:11.747 Sorcerer 10.211.164.101 is alive
00:00:11.755 [Pipeline] retry
00:00:11.756 [Pipeline] {
00:00:11.772 [Pipeline] httpRequest
00:00:11.777 HttpMethod: GET
00:00:11.778 URL: http://10.211.164.101/packages/spdk_3c49040782a29109e63799cfa9442ad547a3ed8d.tar.gz
00:00:11.779 Sending request to url: http://10.211.164.101/packages/spdk_3c49040782a29109e63799cfa9442ad547a3ed8d.tar.gz
00:00:11.791 Response Code: HTTP/1.1 200 OK
00:00:11.792 Success: Status code 200 is in the accepted range: 200,404
00:00:11.792 Saving response body to /var/jenkins/workspace/raid-vg-autotest_2/spdk_3c49040782a29109e63799cfa9442ad547a3ed8d.tar.gz
00:01:05.172 [Pipeline] }
00:01:05.188 [Pipeline] // retry
00:01:05.196 [Pipeline] sh
00:01:05.479 + tar --no-same-owner -xf spdk_3c49040782a29109e63799cfa9442ad547a3ed8d.tar.gz
00:01:08.030 [Pipeline] sh
00:01:08.313 + git -C spdk log --oneline -n5
00:01:08.313 3c4904078 lib/reduce: unlink meta file
00:01:08.313 92108e0a2 fsdev/aio: add support for null IOs
00:01:08.313 dcdab59d3 lib/reduce: Check return code of read superblock
00:01:08.313 95d9d27f7 bdev/nvme: controller failover/multipath doc change
00:01:08.313 f366dac4a bdev/nvme: removed 'multipath' param from spdk_bdev_nvme_create()
00:01:08.332 [Pipeline] writeFile
00:01:08.347 [Pipeline] sh
00:01:08.681 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:08.693 [Pipeline] sh
00:01:08.976 + cat autorun-spdk.conf
00:01:08.976 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:08.976 SPDK_RUN_ASAN=1
00:01:08.976 SPDK_RUN_UBSAN=1
00:01:08.976 SPDK_TEST_RAID=1
00:01:08.976 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:08.984 RUN_NIGHTLY=0
00:01:08.986 [Pipeline] }
00:01:08.999 [Pipeline] // stage
00:01:09.014 [Pipeline] stage
00:01:09.016 [Pipeline] { (Run VM)
00:01:09.030 [Pipeline] sh
00:01:09.313 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:09.313 + echo 'Start stage prepare_nvme.sh'
00:01:09.313 Start stage prepare_nvme.sh
00:01:09.313 + [[ -n 6 ]]
00:01:09.313 + disk_prefix=ex6
00:01:09.313 + [[ -n /var/jenkins/workspace/raid-vg-autotest_2 ]]
00:01:09.313 + [[ -e /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf ]]
00:01:09.313 + source /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf
00:01:09.313 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:09.313 ++ SPDK_RUN_ASAN=1
00:01:09.313 ++ SPDK_RUN_UBSAN=1
00:01:09.313 ++ SPDK_TEST_RAID=1
00:01:09.313 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:09.313 ++ RUN_NIGHTLY=0
00:01:09.313 + cd /var/jenkins/workspace/raid-vg-autotest_2
00:01:09.313 + nvme_files=()
00:01:09.313 + declare -A nvme_files
00:01:09.313 + backend_dir=/var/lib/libvirt/images/backends
00:01:09.313 + nvme_files['nvme.img']=5G
00:01:09.313 + nvme_files['nvme-cmb.img']=5G
00:01:09.313 + nvme_files['nvme-multi0.img']=4G
00:01:09.313 + nvme_files['nvme-multi1.img']=4G
00:01:09.313 + nvme_files['nvme-multi2.img']=4G
00:01:09.313 + nvme_files['nvme-openstack.img']=8G
00:01:09.313 + nvme_files['nvme-zns.img']=5G
00:01:09.313 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:09.313 + (( SPDK_TEST_FTL == 1 ))
00:01:09.313 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:09.313 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:09.313 + for nvme in "${!nvme_files[@]}"
00:01:09.313 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi2.img -s 4G
00:01:09.313 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:09.313 + for nvme in "${!nvme_files[@]}"
00:01:09.313 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-cmb.img -s 5G
00:01:09.313 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:09.313 + for nvme in "${!nvme_files[@]}"
00:01:09.313 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-openstack.img -s 8G
00:01:09.313 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:09.313 + for nvme in "${!nvme_files[@]}"
00:01:09.313 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-zns.img -s 5G
00:01:09.313 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:09.313 + for nvme in "${!nvme_files[@]}"
00:01:09.313 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi1.img -s 4G
00:01:09.313 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:09.313 + for nvme in "${!nvme_files[@]}"
00:01:09.313 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme-multi0.img -s 4G
00:01:09.313 Formatting '/var/lib/libvirt/images/backends/ex6-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:09.573 + for nvme in "${!nvme_files[@]}"
00:01:09.573 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex6-nvme.img -s 5G
00:01:09.573 Formatting '/var/lib/libvirt/images/backends/ex6-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:09.573 ++ sudo grep -rl ex6-nvme.img /etc/libvirt/qemu
00:01:09.573 + echo 'End stage prepare_nvme.sh'
00:01:09.573 End stage prepare_nvme.sh
00:01:09.585 [Pipeline] sh
00:01:09.870 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:09.870 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 -b /var/lib/libvirt/images/backends/ex6-nvme.img -b /var/lib/libvirt/images/backends/ex6-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img -H -a -v -f fedora39
00:01:09.870
00:01:09.870 DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant
00:01:09.870 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest_2/spdk
00:01:09.870 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest_2
00:01:09.870 HELP=0
00:01:09.870 DRY_RUN=0
00:01:09.870 NVME_FILE=/var/lib/libvirt/images/backends/ex6-nvme.img,/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,
00:01:09.870 NVME_DISKS_TYPE=nvme,nvme,
00:01:09.870 NVME_AUTO_CREATE=0
00:01:09.870 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex6-nvme-multi1.img:/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,
00:01:09.870 NVME_CMB=,,
00:01:09.870 NVME_PMR=,,
00:01:09.870 NVME_ZNS=,,
00:01:09.870 NVME_MS=,,
00:01:09.870 NVME_FDP=,,
00:01:09.870 SPDK_VAGRANT_DISTRO=fedora39
00:01:09.870 SPDK_VAGRANT_VMCPU=10
00:01:09.870 SPDK_VAGRANT_VMRAM=12288
00:01:09.870 SPDK_VAGRANT_PROVIDER=libvirt
00:01:09.870 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:09.870 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:09.870 SPDK_OPENSTACK_NETWORK=0
00:01:09.870 VAGRANT_PACKAGE_BOX=0
00:01:09.870 VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest_2/spdk/scripts/vagrant/Vagrantfile
00:01:09.870 FORCE_DISTRO=true
00:01:09.870 VAGRANT_BOX_VERSION=
00:01:09.870 EXTRA_VAGRANTFILES=
00:01:09.870 NIC_MODEL=virtio
00:01:09.870
00:01:09.870 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt'
00:01:09.870 /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest_2
00:01:11.801 Bringing machine 'default' up with 'libvirt' provider...
00:01:12.371 ==> default: Creating image (snapshot of base box volume).
00:01:12.371 ==> default: Creating domain with the following settings...
00:01:12.371 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1728443035_d49877e9dd8fa2de834a
00:01:12.371 ==> default: -- Domain type: kvm
00:01:12.371 ==> default: -- Cpus: 10
00:01:12.371 ==> default: -- Feature: acpi
00:01:12.371 ==> default: -- Feature: apic
00:01:12.371 ==> default: -- Feature: pae
00:01:12.371 ==> default: -- Memory: 12288M
00:01:12.371 ==> default: -- Memory Backing: hugepages:
00:01:12.371 ==> default: -- Management MAC:
00:01:12.371 ==> default: -- Loader:
00:01:12.371 ==> default: -- Nvram:
00:01:12.371 ==> default: -- Base box: spdk/fedora39
00:01:12.371 ==> default: -- Storage pool: default
00:01:12.371 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1728443035_d49877e9dd8fa2de834a.img (20G)
00:01:12.371 ==> default: -- Volume Cache: default
00:01:12.371 ==> default: -- Kernel:
00:01:12.371 ==> default: -- Initrd:
00:01:12.371 ==> default: -- Graphics Type: vnc
00:01:12.371 ==> default: -- Graphics Port: -1
00:01:12.371 ==> default: -- Graphics IP: 127.0.0.1
00:01:12.371 ==> default: -- Graphics Password: Not defined
00:01:12.371 ==> default: -- Video Type: cirrus
00:01:12.371 ==> default: -- Video VRAM: 9216
00:01:12.371 ==> default: -- Sound Type:
00:01:12.371 ==> default: -- Keymap: en-us
00:01:12.371 ==> default: -- TPM Path:
00:01:12.371 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:12.371 ==> default: -- Command line args:
00:01:12.371 ==> default: -> value=-device,
00:01:12.371 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:12.371 ==> default: -> value=-drive,
00:01:12.371 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme.img,if=none,id=nvme-0-drive0,
00:01:12.371 ==> default: -> value=-device,
00:01:12.371 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:12.371 ==> default: -> value=-device,
00:01:12.371 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:12.371 ==> default: -> value=-drive,
00:01:12.371 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi0.img,if=none,id=nvme-1-drive0,
00:01:12.371 ==> default: -> value=-device,
00:01:12.371 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:12.371 ==> default: -> value=-drive,
00:01:12.371 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi1.img,if=none,id=nvme-1-drive1,
00:01:12.371 ==> default: -> value=-device,
00:01:12.371 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:12.371 ==> default: -> value=-drive,
00:01:12.371 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex6-nvme-multi2.img,if=none,id=nvme-1-drive2,
00:01:12.371 ==> default: -> value=-device,
00:01:12.371 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:12.631 ==> default: Creating shared folders metadata...
00:01:12.631 ==> default: Starting domain.
00:01:14.009 ==> default: Waiting for domain to get an IP address...
00:01:32.110 ==> default: Waiting for SSH to become available...
00:01:32.110 ==> default: Configuring and enabling network interfaces...
00:01:37.388 default: SSH address: 192.168.121.185:22
00:01:37.388 default: SSH username: vagrant
00:01:37.388 default: SSH auth method: private key
00:01:39.306 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:47.450 ==> default: Mounting SSHFS shared folder...
00:01:49.990 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:49.990 ==> default: Checking Mount..
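The `-device`/`-drive` pairs in the domain settings above define two emulated NVMe controllers (nvme-0 with one namespace, nvme-1 with three). As a sketch of how those pairs compose into a single QEMU invocation, the snippet below assembles the nvme-0 portion into a command line and prints it; the backing-file path is taken from the log, and the command is only printed here, never executed.

```shell
#!/usr/bin/env bash
# Sketch: compose the logged -device/-drive pairs for controller nvme-0
# into one qemu-system-x86_64 command line. Print only; do not execute.
set -euo pipefail

img=/var/lib/libvirt/images/backends/ex6-nvme.img  # backing file, as in the log

qemu_args=(
  -device "nvme,id=nvme-0,serial=12340,addr=0x10"
  -drive  "format=raw,file=$img,if=none,id=nvme-0-drive0"
  -device "nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,logical_block_size=4096,physical_block_size=4096"
)

# The controller device and its namespace are wired together via the
# drive id (nvme-0-drive0) and the controller bus (nvme-0).
cmdline="qemu-system-x86_64$(printf ' %s' "${qemu_args[@]}")"
echo "$cmdline"
```

The same pattern repeats for nvme-1, where three `-drive`/`nvme-ns` pairs with `nsid=1..3` attach three namespaces to one controller, which is why the guest later reports nvme1n1, nvme1n2, and nvme1n3.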
00:01:51.366 ==> default: Folder Successfully Mounted!
00:01:51.366 ==> default: Running provisioner: file...
00:01:52.305 default: ~/.gitconfig => .gitconfig
00:01:52.873
00:01:52.873 SUCCESS!
00:01:52.873
00:01:52.873 cd to /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt and type "vagrant ssh" to use.
00:01:52.873 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:52.873 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt" to destroy all trace of vm.
00:01:52.873
00:01:52.881 [Pipeline] }
00:01:52.895 [Pipeline] // stage
00:01:52.903 [Pipeline] dir
00:01:52.903 Running in /var/jenkins/workspace/raid-vg-autotest_2/fedora39-libvirt
00:01:52.905 [Pipeline] {
00:01:52.917 [Pipeline] catchError
00:01:52.918 [Pipeline] {
00:01:52.927 [Pipeline] sh
00:01:53.249 + vagrant ssh-config --host vagrant
00:01:53.249 + sed -ne /^Host/,$p
00:01:53.249 + tee ssh_conf
00:01:55.787 Host vagrant
00:01:55.787 HostName 192.168.121.185
00:01:55.787 User vagrant
00:01:55.787 Port 22
00:01:55.787 UserKnownHostsFile /dev/null
00:01:55.787 StrictHostKeyChecking no
00:01:55.787 PasswordAuthentication no
00:01:55.787 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:55.787 IdentitiesOnly yes
00:01:55.787 LogLevel FATAL
00:01:55.787 ForwardAgent yes
00:01:55.787 ForwardX11 yes
00:01:55.787
00:01:55.801 [Pipeline] withEnv
00:01:55.803 [Pipeline] {
00:01:55.816 [Pipeline] sh
00:01:56.099 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:01:56.099 source /etc/os-release
00:01:56.099 [[ -e /image.version ]] && img=$(< /image.version)
00:01:56.099 # Minimal, systemd-like check.
00:01:56.099 if [[ -e /.dockerenv ]]; then
00:01:56.099 # Clear garbage from the node's name:
00:01:56.099 # agt-er_autotest_547-896 -> autotest_547-896
00:01:56.099 # $HOSTNAME is the actual container id
00:01:56.099 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:56.099 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:56.099 # We can assume this is a mount from a host where container is running,
00:01:56.099 # so fetch its hostname to easily identify the target swarm worker.
00:01:56.099 container="$(< /etc/hostname) ($agent)"
00:01:56.099 else
00:01:56.099 # Fallback
00:01:56.099 container=$agent
00:01:56.099 fi
00:01:56.099 fi
00:01:56.099 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:56.099
00:01:56.371 [Pipeline] }
00:01:56.388 [Pipeline] // withEnv
00:01:56.396 [Pipeline] setCustomBuildProperty
00:01:56.412 [Pipeline] stage
00:01:56.415 [Pipeline] { (Tests)
00:01:56.432 [Pipeline] sh
00:01:56.765 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:57.037 [Pipeline] sh
00:01:57.322 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:57.598 [Pipeline] timeout
00:01:57.599 Timeout set to expire in 1 hr 30 min
00:01:57.601 [Pipeline] {
00:01:57.616 [Pipeline] sh
00:01:57.899 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:01:58.467 HEAD is now at 3c4904078 lib/reduce: unlink meta file
00:01:58.480 [Pipeline] sh
00:01:58.764 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:01:59.038 [Pipeline] sh
00:01:59.332 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest_2/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:59.608 [Pipeline] sh
00:01:59.892 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo
00:02:00.154 ++ readlink -f spdk_repo
00:02:00.154 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:00.154 + [[ -n /home/vagrant/spdk_repo ]]
00:02:00.154 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:00.154 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:00.155 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:00.155 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:00.155 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:00.155 + [[ raid-vg-autotest == pkgdep-* ]]
00:02:00.155 + cd /home/vagrant/spdk_repo
00:02:00.155 + source /etc/os-release
00:02:00.155 ++ NAME='Fedora Linux'
00:02:00.155 ++ VERSION='39 (Cloud Edition)'
00:02:00.155 ++ ID=fedora
00:02:00.155 ++ VERSION_ID=39
00:02:00.155 ++ VERSION_CODENAME=
00:02:00.155 ++ PLATFORM_ID=platform:f39
00:02:00.155 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:00.155 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:00.155 ++ LOGO=fedora-logo-icon
00:02:00.155 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:00.155 ++ HOME_URL=https://fedoraproject.org/
00:02:00.155 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:00.155 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:00.155 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:00.155 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:00.155 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:00.155 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:00.155 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:00.155 ++ SUPPORT_END=2024-11-12
00:02:00.155 ++ VARIANT='Cloud Edition'
00:02:00.155 ++ VARIANT_ID=cloud
00:02:00.155 + uname -a
00:02:00.155 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:02:00.155 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:00.726 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:02:00.726 Hugepages
00:02:00.726 node hugesize free / total
00:02:00.726 node0 1048576kB 0 / 0
00:02:00.726 node0 2048kB 0 / 0
00:02:00.726
00:02:00.726 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:00.726 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:02:00.726 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:02:00.726 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:02:00.726 + rm -f /tmp/spdk-ld-path
00:02:00.726 + source autorun-spdk.conf
00:02:00.726 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:00.726 ++ SPDK_RUN_ASAN=1
00:02:00.726 ++ SPDK_RUN_UBSAN=1
00:02:00.726 ++ SPDK_TEST_RAID=1
00:02:00.726 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:00.726 ++ RUN_NIGHTLY=0
00:02:00.726 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:00.726 + [[ -n '' ]]
00:02:00.726 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:00.726 + for M in /var/spdk/build-*-manifest.txt
00:02:00.726 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:02:00.726 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:02:00.726 + for M in /var/spdk/build-*-manifest.txt
00:02:00.726 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:00.726 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:00.726 + for M in /var/spdk/build-*-manifest.txt
00:02:00.726 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:00.726 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:00.726 ++ uname
00:02:00.726 + [[ Linux == \L\i\n\u\x ]]
00:02:00.726 + sudo dmesg -T
00:02:00.985 + sudo dmesg --clear
00:02:00.985 + dmesg_pid=5420
00:02:00.985 + [[ Fedora Linux == FreeBSD ]]
00:02:00.985 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:00.985 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:00.985 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:00.985 + sudo dmesg -Tw
00:02:00.985 + [[ -x /usr/src/fio-static/fio ]]
00:02:00.985 + export FIO_BIN=/usr/src/fio-static/fio
00:02:00.985 + FIO_BIN=/usr/src/fio-static/fio
00:02:00.985 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:02:00.985 + [[ ! -v VFIO_QEMU_BIN ]]
00:02:00.985 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:00.985 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:00.985 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:02:00.985 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:00.985 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:00.985 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:02:00.985 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:00.985 Test configuration:
00:02:00.985 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:00.985 SPDK_RUN_ASAN=1
00:02:00.985 SPDK_RUN_UBSAN=1
00:02:00.985 SPDK_TEST_RAID=1
00:02:00.985 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:00.985 RUN_NIGHTLY=0
03:04:44 -- common/autotest_common.sh@1680 -- $ [[ n == y ]]
00:02:00.985 03:04:44 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:00.985 03:04:44 -- scripts/common.sh@15 -- $ shopt -s extglob
00:02:00.985 03:04:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:00.985 03:04:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:00.985 03:04:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:00.985 03:04:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:00.985 03:04:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:00.985 03:04:44 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:00.985 03:04:44 -- paths/export.sh@5 -- $ export PATH
00:02:00.985 03:04:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:02:00.985 03:04:44 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:00.985 03:04:44 -- common/autobuild_common.sh@486 -- $ date +%s
00:02:00.985 03:04:44 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728443084.XXXXXX
00:02:00.985 03:04:44 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728443084.nAU7oe
00:02:00.985 03:04:44 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]]
00:02:00.985 03:04:44 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']'
00:02:00.985 03:04:44 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:00.985 03:04:44 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:00.985 03:04:44 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:00.985 03:04:44 -- common/autobuild_common.sh@502 -- $ get_config_params
00:02:00.985 03:04:44 -- common/autotest_common.sh@407 -- $ xtrace_disable
00:02:00.985 03:04:44 -- common/autotest_common.sh@10 -- $ set +x
00:02:00.985 03:04:44 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f'
00:02:00.985 03:04:44 -- common/autobuild_common.sh@504 -- $ start_monitor_resources
00:02:00.985 03:04:44 -- pm/common@17 -- $ local monitor
00:02:00.985 03:04:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:00.985 03:04:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:00.985 03:04:44 -- pm/common@25 -- $ sleep 1
00:02:00.985 03:04:44 -- pm/common@21 -- $ date +%s
00:02:00.985 03:04:44 -- pm/common@21 -- $ date +%s
00:02:00.985 03:04:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728443084
00:02:00.985 03:04:44 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1728443084
00:02:01.245 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728443084_collect-vmstat.pm.log
00:02:01.245 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1728443084_collect-cpu-load.pm.log
00:02:02.184 03:04:45 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT
00:02:02.184 03:04:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:02.184 03:04:45 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:02.184 03:04:45 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:02.184 03:04:45 -- spdk/autobuild.sh@16 -- $ date -u
00:02:02.184 Wed Oct 9 03:04:45 AM UTC 2024
00:02:02.184 03:04:45 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:02.184 v25.01-pre-42-g3c4904078
00:02:02.184 03:04:45 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:02:02.184 03:04:45 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:02:02.184 03:04:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:02.184 03:04:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:02.184 03:04:45 -- common/autotest_common.sh@10 -- $ set +x
00:02:02.184 ************************************
00:02:02.184 START TEST asan
00:02:02.184 ************************************
00:02:02.184 using asan
00:02:02.184 03:04:45 asan -- common/autotest_common.sh@1125 -- $ echo 'using asan'
00:02:02.184
00:02:02.184 real 0m0.000s
00:02:02.184 user 0m0.000s
00:02:02.184 sys 0m0.000s
00:02:02.184 03:04:45 asan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:02.184 03:04:45 asan -- common/autotest_common.sh@10 -- $ set +x
00:02:02.184 ************************************
00:02:02.184 END TEST asan
00:02:02.184 ************************************
00:02:02.184 03:04:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:02:02.184 03:04:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:02:02.184 03:04:45 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:02.184 03:04:45 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:02.184 03:04:45 -- common/autotest_common.sh@10 -- $ set +x
00:02:02.184 ************************************
00:02:02.184 START TEST ubsan
00:02:02.184 ************************************
00:02:02.184 using ubsan
00:02:02.184 03:04:45 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:02:02.184
00:02:02.184 real 0m0.000s
00:02:02.184 user 0m0.000s
00:02:02.184 sys 0m0.000s
00:02:02.184 03:04:45 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:02:02.184 03:04:45 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:02:02.184 ************************************
00:02:02.184 END TEST ubsan
00:02:02.184 ************************************
00:02:02.184 03:04:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:02.184 03:04:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:02.184 03:04:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:02.184 03:04:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:02.184 03:04:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:02.184 03:04:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:02:02.184 03:04:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:02:02.184 03:04:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:02:02.184 03:04:45 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared
00:02:02.444 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:02.444 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:02.703 Using 'verbs' RDMA provider
00:02:18.528 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:33.434 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:34.004 Creating mk/config.mk...done.
00:02:34.004 Creating mk/cc.flags.mk...done.
00:02:34.004 Type 'make' to build.
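The `run_test asan` / `run_test ubsan` calls above produce the `START TEST` / `END TEST` banner pairs around each command, and the same wrapper frames the `make` step that follows. A minimal sketch of that banner pattern, as a simplified stand-in (the real helper in SPDK's autotest_common.sh additionally records timing and toggles xtrace):

```shell
#!/usr/bin/env bash
# Simplified stand-in for SPDK's run_test wrapper: print START/END banners
# around an arbitrary command and propagate its exit status.
run_test() {
  local name=$1
  shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"
  local rc=$?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

# Mirrors the invocation seen in the log: run_test asan echo 'using asan'
run_test asan echo 'using asan'
```

Because the wrapped command is passed as `"$@"`, the wrapper works for anything from a one-line `echo` to `make -j10`, which is why the same banner framing recurs throughout the rest of the log.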
00:02:34.004 03:05:17 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:34.004 03:05:17 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:02:34.004 03:05:17 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:02:34.004 03:05:17 -- common/autotest_common.sh@10 -- $ set +x
00:02:34.004 ************************************
00:02:34.004 START TEST make
00:02:34.004 ************************************
00:02:34.004 03:05:17 make -- common/autotest_common.sh@1125 -- $ make -j10
00:02:34.263 make[1]: Nothing to be done for 'all'.
00:02:46.469 The Meson build system
00:02:46.469 Version: 1.5.0
00:02:46.469 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:46.469 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:46.469 Build type: native build
00:02:46.469 Program cat found: YES (/usr/bin/cat)
00:02:46.469 Project name: DPDK
00:02:46.469 Project version: 24.03.0
00:02:46.469 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:46.469 C linker for the host machine: cc ld.bfd 2.40-14
00:02:46.469 Host machine cpu family: x86_64
00:02:46.469 Host machine cpu: x86_64
00:02:46.469 Message: ## Building in Developer Mode ##
00:02:46.469 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:46.469 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:46.469 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:46.469 Program python3 found: YES (/usr/bin/python3)
00:02:46.469 Program cat found: YES (/usr/bin/cat)
00:02:46.469 Compiler for C supports arguments -march=native: YES
00:02:46.469 Checking for size of "void *" : 8
00:02:46.469 Checking for size of "void *" : 8 (cached)
00:02:46.469 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:46.469 Library m found: YES
00:02:46.469 Library numa found: YES
00:02:46.469 Has header "numaif.h" : YES
00:02:46.469 Library fdt found: NO
00:02:46.469 Library execinfo found: NO
00:02:46.469 Has header "execinfo.h" : YES
00:02:46.469 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:46.469 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:46.469 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:46.469 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:46.469 Run-time dependency openssl found: YES 3.1.1
00:02:46.469 Run-time dependency libpcap found: YES 1.10.4
00:02:46.469 Has header "pcap.h" with dependency libpcap: YES
00:02:46.469 Compiler for C supports arguments -Wcast-qual: YES
00:02:46.469 Compiler for C supports arguments -Wdeprecated: YES
00:02:46.469 Compiler for C supports arguments -Wformat: YES
00:02:46.469 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:46.469 Compiler for C supports arguments -Wformat-security: NO
00:02:46.469 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:46.469 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:46.469 Compiler for C supports arguments -Wnested-externs: YES
00:02:46.469 Compiler for C supports arguments -Wold-style-definition: YES
00:02:46.469 Compiler for C supports arguments -Wpointer-arith: YES
00:02:46.469 Compiler for C supports arguments -Wsign-compare: YES
00:02:46.469 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:46.469 Compiler for C supports arguments -Wundef: YES
00:02:46.469 Compiler for C supports arguments -Wwrite-strings: YES
00:02:46.469 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:46.469 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:46.469 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:46.469 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:46.469 Program objdump found: YES (/usr/bin/objdump)
00:02:46.469 Compiler for C supports arguments -mavx512f: YES
00:02:46.469 Checking if "AVX512 checking" compiles: YES
00:02:46.469 Fetching value of define "__SSE4_2__" : 1
00:02:46.469 Fetching value of define "__AES__" : 1
00:02:46.469 Fetching value of define "__AVX__" : 1
00:02:46.469 Fetching value of define "__AVX2__" : 1
00:02:46.469 Fetching value of define "__AVX512BW__" : 1
00:02:46.469 Fetching value of define "__AVX512CD__" : 1
00:02:46.469 Fetching value of define "__AVX512DQ__" : 1
00:02:46.469 Fetching value of define "__AVX512F__" : 1
00:02:46.469 Fetching value of define "__AVX512VL__" : 1
00:02:46.469 Fetching value of define "__PCLMUL__" : 1
00:02:46.469 Fetching value of define "__RDRND__" : 1
00:02:46.470 Fetching value of define "__RDSEED__" : 1
00:02:46.470 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:02:46.470 Fetching value of define "__znver1__" : (undefined)
00:02:46.470 Fetching value of define "__znver2__" : (undefined)
00:02:46.470 Fetching value of define "__znver3__" : (undefined)
00:02:46.470 Fetching value of define "__znver4__" : (undefined)
00:02:46.470 Library asan found: YES
00:02:46.470 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:46.470 Message: lib/log: Defining dependency "log"
00:02:46.470 Message: lib/kvargs: Defining dependency "kvargs"
00:02:46.470 Message: lib/telemetry: Defining dependency "telemetry"
00:02:46.470 Library rt found: YES
00:02:46.470 Checking for function "getentropy" : NO
00:02:46.470 Message: lib/eal: Defining dependency "eal"
00:02:46.470 Message: lib/ring: Defining dependency "ring"
00:02:46.470 Message: lib/rcu: Defining dependency "rcu"
00:02:46.470 Message: lib/mempool: Defining dependency "mempool"
00:02:46.470 Message: lib/mbuf: Defining dependency "mbuf"
00:02:46.470 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:46.470 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:46.470 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:46.470 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:46.470 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:46.470 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached)
00:02:46.470 Compiler for C supports arguments -mpclmul: YES
00:02:46.470 Compiler for C supports arguments -maes: YES
00:02:46.470 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:46.470 Compiler for C supports arguments -mavx512bw: YES
00:02:46.470 Compiler for C supports arguments -mavx512dq: YES
00:02:46.470 Compiler for C supports arguments -mavx512vl: YES
00:02:46.470 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:46.470 Compiler for C supports arguments -mavx2: YES
00:02:46.470 Compiler for C supports arguments -mavx: YES
00:02:46.470 Message: lib/net: Defining dependency "net"
00:02:46.470 Message: lib/meter: Defining dependency "meter"
00:02:46.470 Message: lib/ethdev: Defining dependency "ethdev"
00:02:46.470 Message: lib/pci: Defining dependency "pci"
00:02:46.470 Message: lib/cmdline: Defining dependency "cmdline"
00:02:46.470 Message: lib/hash: Defining dependency "hash"
00:02:46.470 Message: lib/timer: Defining dependency "timer"
00:02:46.470 Message: lib/compressdev: Defining dependency "compressdev"
00:02:46.470 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:46.470 Message: lib/dmadev: Defining dependency "dmadev"
00:02:46.470 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:46.470 Message: lib/power: Defining dependency "power"
00:02:46.470 Message: lib/reorder: Defining dependency "reorder"
00:02:46.470 Message: lib/security: Defining dependency "security"
00:02:46.470 Has header "linux/userfaultfd.h" : YES
00:02:46.470 Has header "linux/vduse.h" : YES
00:02:46.470 Message: lib/vhost: Defining dependency "vhost"
00:02:46.470 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:46.470 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:46.470 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:46.470 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:46.470 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:46.470 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:46.470 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:46.470 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:46.470 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:46.470 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:46.470 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:46.470 Configuring doxy-api-html.conf using configuration
00:02:46.470 Configuring doxy-api-man.conf using configuration
00:02:46.470 Program mandb found: YES (/usr/bin/mandb)
00:02:46.470 Program sphinx-build found: NO
00:02:46.470 Configuring rte_build_config.h using configuration
00:02:46.470 Message:
00:02:46.470 =================
00:02:46.470 Applications Enabled
00:02:46.470 =================
00:02:46.470
00:02:46.470 apps:
00:02:46.470
00:02:46.470
00:02:46.470 Message:
00:02:46.470 =================
00:02:46.470 Libraries Enabled
00:02:46.470 =================
00:02:46.470
00:02:46.470 libs:
00:02:46.470 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:46.470 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:46.470 cryptodev, dmadev, power, reorder, security, vhost,
00:02:46.470
00:02:46.470 Message:
00:02:46.470 ===============
00:02:46.470 Drivers Enabled
00:02:46.470 ===============
00:02:46.470
00:02:46.470 common:
00:02:46.470
00:02:46.470 bus:
00:02:46.470 pci, vdev,
00:02:46.470 mempool:
00:02:46.470 ring,
00:02:46.470 dma:
00:02:46.470
00:02:46.470 net:
00:02:46.470
00:02:46.470 crypto:
00:02:46.470
00:02:46.470 compress:
00:02:46.470
00:02:46.470 vdpa:
00:02:46.470
00:02:46.470
00:02:46.470 Message:
00:02:46.470 =================
00:02:46.470 Content Skipped
00:02:46.470 =================
00:02:46.470
00:02:46.470 apps:
00:02:46.470 dumpcap: explicitly disabled via build config
00:02:46.470 graph: explicitly disabled via build config
00:02:46.470 pdump: explicitly disabled via build config
00:02:46.470 proc-info: explicitly disabled via build config
00:02:46.470 test-acl: explicitly disabled via build config
00:02:46.470 test-bbdev: explicitly disabled via build config
00:02:46.470 test-cmdline: explicitly disabled via build config
00:02:46.470 test-compress-perf: explicitly disabled via build config
00:02:46.471 test-crypto-perf: explicitly disabled via build config
00:02:46.471 test-dma-perf: explicitly disabled via build config
00:02:46.471 test-eventdev: explicitly disabled via build config
00:02:46.471 test-fib: explicitly disabled via build config
00:02:46.471 test-flow-perf: explicitly disabled via build config
00:02:46.471 test-gpudev: explicitly disabled via build config
00:02:46.471 test-mldev: explicitly disabled via build config
00:02:46.471 test-pipeline: explicitly disabled via build config
00:02:46.471 test-pmd: explicitly disabled via build config
00:02:46.471 test-regex: explicitly disabled via build config
00:02:46.471 test-sad: explicitly disabled via build config
00:02:46.471 test-security-perf: explicitly disabled via build config
00:02:46.471
00:02:46.471 libs:
00:02:46.471 argparse: explicitly disabled via build config
00:02:46.471 metrics: explicitly disabled via build config
00:02:46.471 acl: explicitly disabled via build config
00:02:46.471 bbdev: explicitly disabled via build config
00:02:46.471 bitratestats: explicitly disabled via build config
00:02:46.471 bpf: explicitly disabled via build config
00:02:46.471 cfgfile: explicitly disabled via build config
00:02:46.471 distributor: explicitly disabled via build config
00:02:46.471 efd: explicitly disabled via build config
00:02:46.471 eventdev: explicitly disabled via build config
00:02:46.471 dispatcher: explicitly disabled via build config
00:02:46.471 gpudev: explicitly disabled via build config
00:02:46.471 gro: explicitly disabled via build config
00:02:46.471 gso: explicitly disabled via build config
00:02:46.471 ip_frag: explicitly disabled via build config
00:02:46.471 jobstats: explicitly disabled via build config
00:02:46.471 latencystats: explicitly disabled via build config
00:02:46.471 lpm: explicitly disabled via build config
00:02:46.471 member: explicitly disabled via build config
00:02:46.471 pcapng: explicitly disabled via build config
00:02:46.471 rawdev: explicitly disabled via build config
00:02:46.471 regexdev: explicitly disabled via build config
00:02:46.471 mldev: explicitly disabled via build config
00:02:46.471 rib: explicitly disabled via build config
00:02:46.471 sched: explicitly disabled via build config
00:02:46.471 stack: explicitly disabled via build config
00:02:46.471 ipsec: explicitly disabled via build config
00:02:46.471 pdcp: explicitly disabled via build config
00:02:46.471 fib: explicitly disabled via build config
00:02:46.471 port: explicitly disabled via build config
00:02:46.471 pdump: explicitly disabled via build config
00:02:46.471 table: explicitly disabled via build config
00:02:46.471 pipeline: explicitly disabled via build config
00:02:46.471 graph: explicitly disabled via build config
00:02:46.471 node: explicitly disabled via build config
00:02:46.471
00:02:46.471 drivers:
00:02:46.471 common/cpt: not in enabled drivers build config
00:02:46.471 common/dpaax: not in enabled drivers build config
00:02:46.471 common/iavf: not in enabled drivers build config
00:02:46.471 common/idpf: not in enabled drivers build config
00:02:46.471 common/ionic: not in enabled drivers build config
00:02:46.471 common/mvep: not in enabled drivers build config
00:02:46.471 common/octeontx: not in enabled drivers build config
00:02:46.471 bus/auxiliary: not in enabled drivers build config
00:02:46.471 bus/cdx: not in enabled drivers build config
00:02:46.471 bus/dpaa: not in enabled drivers build config
00:02:46.471 bus/fslmc: not in enabled drivers build config
00:02:46.471 bus/ifpga: not in enabled drivers build config
00:02:46.471 bus/platform: not in enabled drivers build config
00:02:46.471 bus/uacce: not in enabled drivers build config
00:02:46.471 bus/vmbus: not in enabled drivers build config
00:02:46.471 common/cnxk: not in enabled drivers build config
00:02:46.471 common/mlx5: not in enabled drivers build config
00:02:46.471 common/nfp: not in enabled drivers build config
00:02:46.471 common/nitrox: not in enabled drivers build config
00:02:46.471 common/qat: not in enabled drivers build config
00:02:46.471 common/sfc_efx: not in enabled drivers build config
00:02:46.471 mempool/bucket: not in enabled drivers build config
00:02:46.471 mempool/cnxk: not in enabled drivers build config
00:02:46.471 mempool/dpaa: not in enabled drivers build config
00:02:46.471 mempool/dpaa2: not in enabled drivers build config
00:02:46.471 mempool/octeontx: not in enabled drivers build config
00:02:46.471 mempool/stack: not in enabled drivers build config
00:02:46.471 dma/cnxk: not in enabled drivers build config
00:02:46.471 dma/dpaa: not in enabled drivers build config
00:02:46.471 dma/dpaa2: not in enabled drivers build config
00:02:46.471 dma/hisilicon: not in enabled drivers build config
00:02:46.471 dma/idxd: not in enabled drivers build config
00:02:46.471 dma/ioat: not in enabled drivers build config
00:02:46.471 dma/skeleton: not in enabled drivers build config
00:02:46.471 net/af_packet: not in enabled drivers build config
00:02:46.471 net/af_xdp: not in enabled drivers build config
00:02:46.471 net/ark: not in enabled drivers build config
00:02:46.471 net/atlantic: not in enabled drivers build config
00:02:46.471 net/avp: not in enabled drivers build config
00:02:46.471 net/axgbe: not in enabled drivers build config
00:02:46.471 net/bnx2x: not in enabled drivers build config
00:02:46.471 net/bnxt: not in enabled drivers build config
00:02:46.471 net/bonding: not in enabled drivers build config
00:02:46.471 net/cnxk: not in enabled drivers build config
00:02:46.471 net/cpfl: not in enabled drivers build config
00:02:46.471 net/cxgbe: not in enabled drivers build config
00:02:46.471 net/dpaa: not in enabled drivers build config
00:02:46.471 net/dpaa2: not in enabled drivers build config
00:02:46.471 net/e1000: not in enabled drivers build config
00:02:46.471 net/ena: not in enabled drivers build config
00:02:46.471 net/enetc: not in enabled drivers build config
00:02:46.471 net/enetfec: not in enabled drivers build config
00:02:46.471 net/enic: not in enabled drivers build config
00:02:46.471 net/failsafe: not in enabled drivers build config
00:02:46.471 net/fm10k: not in enabled drivers build config
00:02:46.471 net/gve: not in enabled drivers build config
00:02:46.471 net/hinic: not in enabled drivers build config
00:02:46.471 net/hns3: not in enabled drivers build config
00:02:46.471 net/i40e: not in enabled drivers build config
00:02:46.472 net/iavf: not in enabled drivers build config
00:02:46.472 net/ice: not in enabled drivers build config
00:02:46.472 net/idpf: not in enabled drivers build config
00:02:46.472 net/igc: not in enabled drivers build config
00:02:46.472 net/ionic: not in enabled drivers build config
00:02:46.472 net/ipn3ke: not in enabled drivers build config
00:02:46.472 net/ixgbe: not in enabled drivers build config
00:02:46.472 net/mana: not in enabled drivers build config
00:02:46.472 net/memif: not in enabled drivers build config
00:02:46.472 net/mlx4: not in enabled drivers build config
00:02:46.472 net/mlx5: not in enabled drivers build config
00:02:46.472 net/mvneta: not in enabled drivers build config
00:02:46.472 net/mvpp2: not in enabled drivers build config
00:02:46.472 net/netvsc: not in enabled drivers build config
00:02:46.472 net/nfb: not in enabled drivers build config
00:02:46.472 net/nfp: not in enabled drivers build config
00:02:46.472 net/ngbe: not in enabled drivers build config
00:02:46.472 net/null: not in enabled drivers build config
00:02:46.472 net/octeontx: not in enabled drivers build config
00:02:46.472 net/octeon_ep: not in enabled drivers build config
00:02:46.472 net/pcap: not in enabled drivers build config
00:02:46.472 net/pfe: not in enabled drivers build config
00:02:46.472 net/qede: not in enabled drivers build config
00:02:46.472 net/ring: not in enabled drivers build config
00:02:46.472 net/sfc: not in enabled drivers build config
00:02:46.472 net/softnic: not in enabled drivers build config
00:02:46.472 net/tap: not in enabled drivers build config
00:02:46.472 net/thunderx: not in enabled drivers build config
00:02:46.472 net/txgbe: not in enabled drivers build config
00:02:46.472 net/vdev_netvsc: not in enabled drivers build config
00:02:46.472 net/vhost: not in enabled drivers build config
00:02:46.472 net/virtio: not in enabled drivers build config
00:02:46.472 net/vmxnet3: not in enabled drivers build config
00:02:46.472 raw/*: missing internal dependency, "rawdev"
00:02:46.472 crypto/armv8: not in enabled drivers build config
00:02:46.472 crypto/bcmfs: not in enabled drivers build config
00:02:46.472 crypto/caam_jr: not in enabled drivers build config
00:02:46.472 crypto/ccp: not in enabled drivers build config
00:02:46.472 crypto/cnxk: not in enabled drivers build config
00:02:46.472 crypto/dpaa_sec: not in enabled drivers build config
00:02:46.472 crypto/dpaa2_sec: not in enabled drivers build config
00:02:46.472 crypto/ipsec_mb: not in enabled drivers build config
00:02:46.472 crypto/mlx5: not in enabled drivers build config
00:02:46.472 crypto/mvsam: not in enabled drivers build config
00:02:46.472 crypto/nitrox: not in enabled drivers build config
00:02:46.472 crypto/null: not in enabled drivers build config
00:02:46.472 crypto/octeontx: not in enabled drivers build config
00:02:46.472 crypto/openssl: not in enabled drivers build config
00:02:46.472 crypto/scheduler: not in enabled drivers build config
00:02:46.472 crypto/uadk: not in enabled drivers build config
00:02:46.472 crypto/virtio: not in enabled drivers build config
00:02:46.472 compress/isal: not in enabled drivers build config
00:02:46.472 compress/mlx5: not in enabled drivers build config
00:02:46.472 compress/nitrox: not in enabled drivers build config
00:02:46.472 compress/octeontx: not in enabled drivers build config
00:02:46.472 compress/zlib: not in enabled drivers build config
00:02:46.472 regex/*: missing internal dependency, "regexdev"
00:02:46.472 ml/*: missing internal dependency, "mldev"
00:02:46.472 vdpa/ifc: not in enabled drivers build config
00:02:46.472 vdpa/mlx5: not in enabled drivers build config
00:02:46.472 vdpa/nfp: not in enabled drivers build config
00:02:46.472 vdpa/sfc: not in enabled drivers build config
00:02:46.472 event/*: missing internal dependency, "eventdev"
00:02:46.472 baseband/*: missing internal dependency, "bbdev"
00:02:46.472 gpu/*: missing internal dependency, "gpudev"
00:02:46.472
00:02:46.472
00:02:46.472 Build targets in project: 85
00:02:46.472
00:02:46.472 DPDK 24.03.0
00:02:46.472
00:02:46.472 User defined options
00:02:46.472 buildtype : debug
00:02:46.472 default_library : shared
00:02:46.472 libdir : lib
00:02:46.472 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:46.472 b_sanitize : address
00:02:46.472 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:46.472 c_link_args :
00:02:46.472 cpu_instruction_set: native
00:02:46.472 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:46.472 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:46.472 enable_docs : false
00:02:46.472 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:02:46.472 enable_kmods : false
00:02:46.472 max_lcores : 128
00:02:46.472 tests : false
00:02:46.472
00:02:46.472 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:46.472 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:46.472 [1/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:46.472 [2/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:46.472 [3/268] Linking static target lib/librte_kvargs.a
00:02:46.472 [4/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:46.732 [5/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:46.732 [6/268] Linking static target lib/librte_log.a
00:02:46.990 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:46.990 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:46.990 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:46.990 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:46.990 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:46.990 [12/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.248 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:47.248 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:47.248 [15/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:47.248 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:47.248 [17/268] Linking static target lib/librte_telemetry.a
00:02:47.248 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:47.816 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:47.816 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:47.816 [21/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:02:47.816 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:47.816 [23/268] Linking target lib/librte_log.so.24.1
00:02:47.816 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:47.816 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:47.816 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:48.075 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:48.075 [28/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:02:48.075 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:02:48.075 [30/268] Linking target lib/librte_kvargs.so.24.1
00:02:48.075 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:02:48.075 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:02:48.335 [33/268] Linking target lib/librte_telemetry.so.24.1
00:02:48.335 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:48.335 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:02:48.335 [36/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:02:48.335 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:02:48.335 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:02:48.335 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:02:48.335 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:02:48.335 [41/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:02:48.594 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:02:48.594 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:02:48.594 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:02:48.594 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:02:48.853 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:02:48.853 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:02:49.113 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:02:49.113 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:02:49.113 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:02:49.113 [51/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:02:49.113 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:02:49.113 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:02:49.372 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:02:49.372 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:02:49.372 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:02:49.372 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:02:49.632 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:02:49.632 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:02:49.891 [60/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:02:49.891 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:02:49.891 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:02:49.891 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:02:49.891 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:02:49.891 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:02:49.891 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:02:50.151 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:02:50.151 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:02:50.410 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:02:50.410 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:02:50.670 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:02:50.670 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:02:50.670 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:02:50.670 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:02:50.670 [75/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:02:50.670 [76/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:02:50.670 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:02:50.670 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:02:50.932 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:02:50.932 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:02:50.932 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:02:50.932 [82/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:02:51.191 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:02:51.191 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:02:51.191 [85/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:02:51.191 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:02:51.450 [87/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:02:51.450 [88/268] Linking static target lib/librte_eal.a
00:02:51.450 [89/268] Linking static target lib/librte_ring.a
00:02:51.450 [90/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:02:51.709 [91/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:02:51.709 [92/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:02:51.709 [93/268] Linking static target lib/librte_rcu.a
00:02:51.709 [94/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:02:51.709 [95/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:02:51.967 [96/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:02:51.967 [97/268] Linking static target lib/librte_mempool.a
00:02:51.967 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:02:51.967 [99/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.225 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:02:52.225 [101/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:02:52.225 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:02:52.225 [103/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:02:52.225 [104/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:02:52.225 [105/268] Linking static target lib/librte_mbuf.a
00:02:52.484 [106/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:02:52.484 [107/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:02:52.484 [108/268] Linking static target lib/librte_meter.a
00:02:52.484 [109/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:02:52.484 [110/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:02:52.484 [111/268] Linking static target lib/librte_net.a
00:02:52.744 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:02:52.744 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:02:52.744 [114/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.002 [115/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:02:53.002 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:02:53.261 [117/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.520 [118/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.520 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:02:53.520 [120/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:53.520 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:02:53.788 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:02:53.788 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:02:53.788 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:02:54.047 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:02:54.047 [126/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:02:54.047 [127/268] Linking static target lib/librte_pci.a
00:02:54.047 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:02:54.306 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:02:54.306 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:02:54.306 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:02:54.306 [132/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:02:54.306 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:02:54.306 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:02:54.306 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:02:54.306 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:02:54.306 [137/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:02:54.306 [138/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:54.306 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:02:54.306 [140/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:02:54.566 [141/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:02:54.566 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:02:54.566 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:02:54.566 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:02:54.824 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:02:54.824 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:54.824 [147/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:02:54.824 [148/268] Linking static target lib/librte_cmdline.a
00:02:54.824 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:02:55.082 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:02:55.082 [151/268] Linking static target lib/librte_timer.a
00:02:55.082 [152/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:02:55.082 [153/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:02:55.341 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:02:55.599 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:02:55.599 [156/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:02:55.599 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:02:55.599 [158/268] Linking static target lib/librte_compressdev.a
00:02:55.599 [159/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:02:55.599 [160/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:02:55.857 [161/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:55.857 [162/268] Linking static target lib/librte_hash.a
00:02:55.857 [163/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:02:56.116 [164/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:02:56.116 [165/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:02:56.374 [166/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:02:56.374 [167/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:56.374 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:02:56.374 [169/268] Linking static target lib/librte_dmadev.a
00:02:56.374 [170/268] Linking static target lib/librte_ethdev.a
00:02:56.632 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:02:56.632 [172/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:02:56.632 [173/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.890 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:56.890 [175/268] Compiling C object
lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:56.890 [176/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:56.890 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:56.890 [178/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.148 [179/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:57.148 [180/268] Linking static target lib/librte_cryptodev.a 00:02:57.148 [181/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:57.148 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:57.148 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:57.148 [184/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:57.406 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:57.406 [186/268] Linking static target lib/librte_power.a 00:02:57.663 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:57.664 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:57.664 [189/268] Linking static target lib/librte_reorder.a 00:02:57.664 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:57.922 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:57.922 [192/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:57.922 [193/268] Linking static target lib/librte_security.a 00:02:58.489 [194/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.489 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:58.754 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:58.754 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:58.754 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:58.754 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:59.022 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:59.022 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:59.281 [202/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:59.281 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:59.281 [204/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:59.281 [205/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:59.540 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:59.540 [207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:59.540 [208/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:59.540 [209/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:59.799 [210/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:59.799 [211/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:59.799 [212/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:59.799 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:59.799 [214/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:59.799 [215/268] Linking static target drivers/librte_bus_pci.a 00:03:00.057 [216/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:00.057 [217/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:00.057 [218/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:00.057 [219/268] Compiling C object 
drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.057 [220/268] Linking static target drivers/librte_bus_vdev.a 00:03:00.057 [221/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:00.316 [222/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:00.316 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:00.316 [224/268] Linking static target drivers/librte_mempool_ring.a 00:03:00.316 [225/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:00.316 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:00.574 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:01.958 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:02.891 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:02.891 [230/268] Linking target lib/librte_eal.so.24.1 00:03:03.149 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:03.149 [232/268] Linking target lib/librte_pci.so.24.1 00:03:03.149 [233/268] Linking target lib/librte_ring.so.24.1 00:03:03.149 [234/268] Linking target lib/librte_timer.so.24.1 00:03:03.149 [235/268] Linking target lib/librte_meter.so.24.1 00:03:03.149 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:03.149 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:03.407 [238/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:03.407 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:03.407 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:03.407 [241/268] Generating symbol file 
lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:03.407 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:03.407 [243/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:03.407 [244/268] Linking target lib/librte_mempool.so.24.1 00:03:03.407 [245/268] Linking target lib/librte_rcu.so.24.1 00:03:03.666 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:03.666 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:03.666 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:03.666 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:03.666 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:03.924 [251/268] Linking target lib/librte_net.so.24.1 00:03:03.924 [252/268] Linking target lib/librte_reorder.so.24.1 00:03:03.924 [253/268] Linking target lib/librte_cryptodev.so.24.1 00:03:03.924 [254/268] Linking target lib/librte_compressdev.so.24.1 00:03:03.924 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:03.924 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:03.924 [257/268] Linking target lib/librte_cmdline.so.24.1 00:03:03.924 [258/268] Linking target lib/librte_security.so.24.1 00:03:03.924 [259/268] Linking target lib/librte_hash.so.24.1 00:03:04.182 [260/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:05.557 [261/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:05.557 [262/268] Linking target lib/librte_ethdev.so.24.1 00:03:05.816 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:05.816 [264/268] Linking target lib/librte_power.so.24.1 00:03:06.751 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:06.751 [266/268] 
Linking static target lib/librte_vhost.a 00:03:09.304 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:09.304 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:09.304 INFO: autodetecting backend as ninja 00:03:09.304 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:03:31.242 CC lib/ut_mock/mock.o 00:03:31.242 CC lib/ut/ut.o 00:03:31.242 CC lib/log/log_deprecated.o 00:03:31.242 CC lib/log/log.o 00:03:31.242 CC lib/log/log_flags.o 00:03:31.242 LIB libspdk_ut.a 00:03:31.242 LIB libspdk_ut_mock.a 00:03:31.242 LIB libspdk_log.a 00:03:31.242 SO libspdk_ut.so.2.0 00:03:31.242 SO libspdk_ut_mock.so.6.0 00:03:31.242 SO libspdk_log.so.7.0 00:03:31.242 SYMLINK libspdk_ut.so 00:03:31.242 SYMLINK libspdk_ut_mock.so 00:03:31.242 SYMLINK libspdk_log.so 00:03:31.242 CC lib/dma/dma.o 00:03:31.242 CXX lib/trace_parser/trace.o 00:03:31.242 CC lib/ioat/ioat.o 00:03:31.242 CC lib/util/base64.o 00:03:31.242 CC lib/util/cpuset.o 00:03:31.242 CC lib/util/bit_array.o 00:03:31.242 CC lib/util/crc16.o 00:03:31.242 CC lib/util/crc32c.o 00:03:31.242 CC lib/util/crc32.o 00:03:31.242 CC lib/vfio_user/host/vfio_user_pci.o 00:03:31.242 CC lib/util/crc32_ieee.o 00:03:31.242 CC lib/util/crc64.o 00:03:31.242 CC lib/util/dif.o 00:03:31.242 CC lib/vfio_user/host/vfio_user.o 00:03:31.242 LIB libspdk_dma.a 00:03:31.242 SO libspdk_dma.so.5.0 00:03:31.242 CC lib/util/fd.o 00:03:31.242 CC lib/util/fd_group.o 00:03:31.242 SYMLINK libspdk_dma.so 00:03:31.242 CC lib/util/file.o 00:03:31.242 CC lib/util/hexlify.o 00:03:31.242 LIB libspdk_ioat.a 00:03:31.242 SO libspdk_ioat.so.7.0 00:03:31.242 CC lib/util/iov.o 00:03:31.242 CC lib/util/math.o 00:03:31.242 SYMLINK libspdk_ioat.so 00:03:31.242 CC lib/util/net.o 00:03:31.242 CC lib/util/pipe.o 00:03:31.242 LIB libspdk_vfio_user.a 00:03:31.242 CC lib/util/strerror_tls.o 00:03:31.242 CC lib/util/string.o 00:03:31.242 SO 
libspdk_vfio_user.so.5.0 00:03:31.242 CC lib/util/uuid.o 00:03:31.242 SYMLINK libspdk_vfio_user.so 00:03:31.242 CC lib/util/xor.o 00:03:31.242 CC lib/util/zipf.o 00:03:31.242 CC lib/util/md5.o 00:03:31.242 LIB libspdk_util.a 00:03:31.242 SO libspdk_util.so.10.0 00:03:31.242 LIB libspdk_trace_parser.a 00:03:31.242 SYMLINK libspdk_util.so 00:03:31.242 SO libspdk_trace_parser.so.6.0 00:03:31.242 SYMLINK libspdk_trace_parser.so 00:03:31.242 CC lib/conf/conf.o 00:03:31.242 CC lib/vmd/vmd.o 00:03:31.242 CC lib/idxd/idxd.o 00:03:31.242 CC lib/rdma_utils/rdma_utils.o 00:03:31.242 CC lib/idxd/idxd_user.o 00:03:31.242 CC lib/json/json_parse.o 00:03:31.242 CC lib/vmd/led.o 00:03:31.242 CC lib/json/json_util.o 00:03:31.242 CC lib/rdma_provider/common.o 00:03:31.242 CC lib/env_dpdk/env.o 00:03:31.242 CC lib/env_dpdk/memory.o 00:03:31.242 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:31.242 LIB libspdk_conf.a 00:03:31.242 CC lib/json/json_write.o 00:03:31.242 CC lib/env_dpdk/pci.o 00:03:31.242 SO libspdk_conf.so.6.0 00:03:31.242 CC lib/idxd/idxd_kernel.o 00:03:31.242 LIB libspdk_rdma_utils.a 00:03:31.242 SYMLINK libspdk_conf.so 00:03:31.242 CC lib/env_dpdk/init.o 00:03:31.242 SO libspdk_rdma_utils.so.1.0 00:03:31.242 LIB libspdk_rdma_provider.a 00:03:31.242 SYMLINK libspdk_rdma_utils.so 00:03:31.242 CC lib/env_dpdk/threads.o 00:03:31.242 SO libspdk_rdma_provider.so.6.0 00:03:31.242 CC lib/env_dpdk/pci_ioat.o 00:03:31.242 SYMLINK libspdk_rdma_provider.so 00:03:31.242 CC lib/env_dpdk/pci_virtio.o 00:03:31.242 CC lib/env_dpdk/pci_vmd.o 00:03:31.242 LIB libspdk_json.a 00:03:31.242 SO libspdk_json.so.6.0 00:03:31.242 CC lib/env_dpdk/pci_idxd.o 00:03:31.242 CC lib/env_dpdk/pci_event.o 00:03:31.501 CC lib/env_dpdk/sigbus_handler.o 00:03:31.501 SYMLINK libspdk_json.so 00:03:31.501 CC lib/env_dpdk/pci_dpdk.o 00:03:31.501 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:31.501 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:31.501 LIB libspdk_idxd.a 00:03:31.501 LIB libspdk_vmd.a 00:03:31.501 SO 
libspdk_idxd.so.12.1 00:03:31.501 SO libspdk_vmd.so.6.0 00:03:31.501 SYMLINK libspdk_idxd.so 00:03:31.501 SYMLINK libspdk_vmd.so 00:03:31.761 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:31.761 CC lib/jsonrpc/jsonrpc_server.o 00:03:31.761 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:31.761 CC lib/jsonrpc/jsonrpc_client.o 00:03:32.020 LIB libspdk_jsonrpc.a 00:03:32.020 SO libspdk_jsonrpc.so.6.0 00:03:32.020 SYMLINK libspdk_jsonrpc.so 00:03:32.589 LIB libspdk_env_dpdk.a 00:03:32.589 CC lib/rpc/rpc.o 00:03:32.589 SO libspdk_env_dpdk.so.15.0 00:03:32.589 SYMLINK libspdk_env_dpdk.so 00:03:32.589 LIB libspdk_rpc.a 00:03:32.858 SO libspdk_rpc.so.6.0 00:03:32.858 SYMLINK libspdk_rpc.so 00:03:33.121 CC lib/trace/trace.o 00:03:33.121 CC lib/trace/trace_flags.o 00:03:33.121 CC lib/trace/trace_rpc.o 00:03:33.121 CC lib/keyring/keyring_rpc.o 00:03:33.121 CC lib/keyring/keyring.o 00:03:33.121 CC lib/notify/notify.o 00:03:33.121 CC lib/notify/notify_rpc.o 00:03:33.380 LIB libspdk_notify.a 00:03:33.380 SO libspdk_notify.so.6.0 00:03:33.380 LIB libspdk_keyring.a 00:03:33.380 LIB libspdk_trace.a 00:03:33.380 SYMLINK libspdk_notify.so 00:03:33.639 SO libspdk_keyring.so.2.0 00:03:33.639 SO libspdk_trace.so.11.0 00:03:33.639 SYMLINK libspdk_keyring.so 00:03:33.639 SYMLINK libspdk_trace.so 00:03:33.899 CC lib/sock/sock.o 00:03:33.899 CC lib/sock/sock_rpc.o 00:03:33.899 CC lib/thread/thread.o 00:03:33.899 CC lib/thread/iobuf.o 00:03:34.466 LIB libspdk_sock.a 00:03:34.466 SO libspdk_sock.so.10.0 00:03:34.466 SYMLINK libspdk_sock.so 00:03:35.035 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:35.035 CC lib/nvme/nvme_ctrlr.o 00:03:35.035 CC lib/nvme/nvme_fabric.o 00:03:35.035 CC lib/nvme/nvme_ns_cmd.o 00:03:35.035 CC lib/nvme/nvme_ns.o 00:03:35.035 CC lib/nvme/nvme_pcie.o 00:03:35.035 CC lib/nvme/nvme_qpair.o 00:03:35.035 CC lib/nvme/nvme_pcie_common.o 00:03:35.035 CC lib/nvme/nvme.o 00:03:35.969 CC lib/nvme/nvme_quirks.o 00:03:35.969 LIB libspdk_thread.a 00:03:35.969 CC lib/nvme/nvme_transport.o 00:03:35.969 
CC lib/nvme/nvme_discovery.o 00:03:35.969 SO libspdk_thread.so.10.2 00:03:35.969 SYMLINK libspdk_thread.so 00:03:35.969 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:35.969 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:35.969 CC lib/nvme/nvme_tcp.o 00:03:35.969 CC lib/nvme/nvme_opal.o 00:03:36.225 CC lib/accel/accel.o 00:03:36.225 CC lib/nvme/nvme_io_msg.o 00:03:36.225 CC lib/nvme/nvme_poll_group.o 00:03:36.491 CC lib/nvme/nvme_zns.o 00:03:36.491 CC lib/blob/blobstore.o 00:03:36.760 CC lib/init/json_config.o 00:03:36.760 CC lib/init/subsystem.o 00:03:36.760 CC lib/virtio/virtio.o 00:03:37.018 CC lib/virtio/virtio_vhost_user.o 00:03:37.018 CC lib/virtio/virtio_vfio_user.o 00:03:37.018 CC lib/init/subsystem_rpc.o 00:03:37.018 CC lib/virtio/virtio_pci.o 00:03:37.018 CC lib/init/rpc.o 00:03:37.277 CC lib/nvme/nvme_stubs.o 00:03:37.277 CC lib/blob/request.o 00:03:37.277 LIB libspdk_init.a 00:03:37.277 CC lib/fsdev/fsdev.o 00:03:37.277 CC lib/fsdev/fsdev_io.o 00:03:37.277 SO libspdk_init.so.6.0 00:03:37.277 LIB libspdk_virtio.a 00:03:37.536 SO libspdk_virtio.so.7.0 00:03:37.536 SYMLINK libspdk_init.so 00:03:37.536 CC lib/nvme/nvme_auth.o 00:03:37.536 SYMLINK libspdk_virtio.so 00:03:37.536 CC lib/fsdev/fsdev_rpc.o 00:03:37.536 CC lib/accel/accel_rpc.o 00:03:37.536 CC lib/blob/zeroes.o 00:03:37.795 CC lib/blob/blob_bs_dev.o 00:03:37.795 CC lib/event/app.o 00:03:37.795 CC lib/accel/accel_sw.o 00:03:37.795 CC lib/event/reactor.o 00:03:37.795 CC lib/nvme/nvme_cuse.o 00:03:37.795 CC lib/nvme/nvme_rdma.o 00:03:37.795 CC lib/event/log_rpc.o 00:03:38.053 CC lib/event/app_rpc.o 00:03:38.053 CC lib/event/scheduler_static.o 00:03:38.053 LIB libspdk_fsdev.a 00:03:38.053 LIB libspdk_accel.a 00:03:38.053 SO libspdk_fsdev.so.1.0 00:03:38.312 SO libspdk_accel.so.16.0 00:03:38.312 SYMLINK libspdk_fsdev.so 00:03:38.312 SYMLINK libspdk_accel.so 00:03:38.312 LIB libspdk_event.a 00:03:38.312 SO libspdk_event.so.15.0 00:03:38.312 SYMLINK libspdk_event.so 00:03:38.571 CC 
lib/fuse_dispatcher/fuse_dispatcher.o 00:03:38.571 CC lib/bdev/bdev_rpc.o 00:03:38.571 CC lib/bdev/bdev.o 00:03:38.571 CC lib/bdev/bdev_zone.o 00:03:38.571 CC lib/bdev/scsi_nvme.o 00:03:38.571 CC lib/bdev/part.o 00:03:39.508 LIB libspdk_fuse_dispatcher.a 00:03:39.508 SO libspdk_fuse_dispatcher.so.1.0 00:03:39.508 LIB libspdk_nvme.a 00:03:39.508 SYMLINK libspdk_fuse_dispatcher.so 00:03:39.508 SO libspdk_nvme.so.14.0 00:03:39.766 SYMLINK libspdk_nvme.so 00:03:40.703 LIB libspdk_blob.a 00:03:40.963 SO libspdk_blob.so.11.0 00:03:40.963 SYMLINK libspdk_blob.so 00:03:41.529 CC lib/blobfs/blobfs.o 00:03:41.529 CC lib/blobfs/tree.o 00:03:41.529 CC lib/lvol/lvol.o 00:03:41.787 LIB libspdk_bdev.a 00:03:42.044 SO libspdk_bdev.so.17.0 00:03:42.044 SYMLINK libspdk_bdev.so 00:03:42.302 CC lib/nbd/nbd.o 00:03:42.302 CC lib/nbd/nbd_rpc.o 00:03:42.302 CC lib/scsi/dev.o 00:03:42.302 CC lib/scsi/port.o 00:03:42.302 CC lib/scsi/lun.o 00:03:42.302 CC lib/ftl/ftl_core.o 00:03:42.302 CC lib/nvmf/ctrlr.o 00:03:42.302 CC lib/ublk/ublk.o 00:03:42.302 LIB libspdk_blobfs.a 00:03:42.561 SO libspdk_blobfs.so.10.0 00:03:42.561 CC lib/scsi/scsi.o 00:03:42.561 LIB libspdk_lvol.a 00:03:42.561 SO libspdk_lvol.so.10.0 00:03:42.561 SYMLINK libspdk_blobfs.so 00:03:42.561 CC lib/scsi/scsi_bdev.o 00:03:42.561 CC lib/ftl/ftl_init.o 00:03:42.561 SYMLINK libspdk_lvol.so 00:03:42.561 CC lib/ftl/ftl_layout.o 00:03:42.561 CC lib/ftl/ftl_debug.o 00:03:42.561 CC lib/ftl/ftl_io.o 00:03:42.820 CC lib/ftl/ftl_sb.o 00:03:42.820 CC lib/scsi/scsi_pr.o 00:03:42.820 LIB libspdk_nbd.a 00:03:42.820 SO libspdk_nbd.so.7.0 00:03:42.820 CC lib/ftl/ftl_l2p.o 00:03:42.820 CC lib/ftl/ftl_l2p_flat.o 00:03:43.082 SYMLINK libspdk_nbd.so 00:03:43.082 CC lib/ftl/ftl_nv_cache.o 00:03:43.082 CC lib/scsi/scsi_rpc.o 00:03:43.082 CC lib/ftl/ftl_band.o 00:03:43.082 CC lib/ftl/ftl_band_ops.o 00:03:43.082 CC lib/ftl/ftl_writer.o 00:03:43.082 CC lib/scsi/task.o 00:03:43.082 CC lib/ublk/ublk_rpc.o 00:03:43.082 CC lib/ftl/ftl_rq.o 00:03:43.340 
CC lib/nvmf/ctrlr_discovery.o 00:03:43.340 CC lib/nvmf/ctrlr_bdev.o 00:03:43.340 LIB libspdk_ublk.a 00:03:43.340 SO libspdk_ublk.so.3.0 00:03:43.340 LIB libspdk_scsi.a 00:03:43.340 CC lib/ftl/ftl_reloc.o 00:03:43.340 CC lib/nvmf/subsystem.o 00:03:43.597 SO libspdk_scsi.so.9.0 00:03:43.597 SYMLINK libspdk_ublk.so 00:03:43.597 CC lib/nvmf/nvmf.o 00:03:43.597 CC lib/nvmf/nvmf_rpc.o 00:03:43.597 CC lib/ftl/ftl_l2p_cache.o 00:03:43.597 SYMLINK libspdk_scsi.so 00:03:43.597 CC lib/ftl/ftl_p2l.o 00:03:43.855 CC lib/nvmf/transport.o 00:03:43.855 CC lib/nvmf/tcp.o 00:03:44.113 CC lib/ftl/ftl_p2l_log.o 00:03:44.113 CC lib/ftl/mngt/ftl_mngt.o 00:03:44.113 CC lib/nvmf/stubs.o 00:03:44.113 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:44.370 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:44.370 CC lib/nvmf/mdns_server.o 00:03:44.370 CC lib/nvmf/rdma.o 00:03:44.370 CC lib/nvmf/auth.o 00:03:44.627 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:44.627 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:44.627 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:44.627 CC lib/iscsi/conn.o 00:03:44.884 CC lib/iscsi/init_grp.o 00:03:44.884 CC lib/vhost/vhost.o 00:03:44.884 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:44.884 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:44.884 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:44.884 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:45.143 CC lib/iscsi/iscsi.o 00:03:45.143 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:45.143 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:45.143 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:45.143 CC lib/ftl/utils/ftl_conf.o 00:03:45.401 CC lib/ftl/utils/ftl_md.o 00:03:45.401 CC lib/ftl/utils/ftl_mempool.o 00:03:45.401 CC lib/vhost/vhost_rpc.o 00:03:45.401 CC lib/iscsi/param.o 00:03:45.401 CC lib/ftl/utils/ftl_bitmap.o 00:03:45.401 CC lib/vhost/vhost_scsi.o 00:03:45.659 CC lib/vhost/vhost_blk.o 00:03:45.659 CC lib/vhost/rte_vhost_user.o 00:03:45.659 CC lib/iscsi/portal_grp.o 00:03:45.659 CC lib/iscsi/tgt_node.o 00:03:45.659 CC lib/ftl/utils/ftl_property.o 00:03:45.659 CC lib/iscsi/iscsi_subsystem.o 
00:03:45.917 CC lib/iscsi/iscsi_rpc.o 00:03:45.917 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:45.917 CC lib/iscsi/task.o 00:03:46.176 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:46.176 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:46.176 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:46.176 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:46.434 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:46.434 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:46.434 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:46.434 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:46.434 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:46.434 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:46.434 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:46.725 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:46.725 CC lib/ftl/base/ftl_base_dev.o 00:03:46.725 CC lib/ftl/base/ftl_base_bdev.o 00:03:46.725 CC lib/ftl/ftl_trace.o 00:03:46.725 LIB libspdk_iscsi.a 00:03:46.725 LIB libspdk_vhost.a 00:03:46.725 SO libspdk_iscsi.so.8.0 00:03:46.725 SO libspdk_vhost.so.8.0 00:03:46.986 LIB libspdk_ftl.a 00:03:46.986 SYMLINK libspdk_vhost.so 00:03:46.986 SYMLINK libspdk_iscsi.so 00:03:46.986 LIB libspdk_nvmf.a 00:03:47.248 SO libspdk_ftl.so.9.0 00:03:47.248 SO libspdk_nvmf.so.19.0 00:03:47.509 SYMLINK libspdk_ftl.so 00:03:47.509 SYMLINK libspdk_nvmf.so 00:03:48.076 CC module/env_dpdk/env_dpdk_rpc.o 00:03:48.076 CC module/accel/ioat/accel_ioat.o 00:03:48.076 CC module/sock/posix/posix.o 00:03:48.076 CC module/accel/dsa/accel_dsa.o 00:03:48.076 CC module/accel/iaa/accel_iaa.o 00:03:48.076 CC module/fsdev/aio/fsdev_aio.o 00:03:48.076 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:48.076 CC module/accel/error/accel_error.o 00:03:48.076 CC module/blob/bdev/blob_bdev.o 00:03:48.076 CC module/keyring/file/keyring.o 00:03:48.076 LIB libspdk_env_dpdk_rpc.a 00:03:48.076 SO libspdk_env_dpdk_rpc.so.6.0 00:03:48.076 SYMLINK libspdk_env_dpdk_rpc.so 00:03:48.076 CC module/accel/error/accel_error_rpc.o 00:03:48.334 CC module/accel/ioat/accel_ioat_rpc.o 00:03:48.334 CC 
module/keyring/file/keyring_rpc.o 00:03:48.334 CC module/accel/iaa/accel_iaa_rpc.o 00:03:48.334 LIB libspdk_scheduler_dynamic.a 00:03:48.334 SO libspdk_scheduler_dynamic.so.4.0 00:03:48.334 LIB libspdk_accel_error.a 00:03:48.334 SO libspdk_accel_error.so.2.0 00:03:48.334 SYMLINK libspdk_scheduler_dynamic.so 00:03:48.334 CC module/accel/dsa/accel_dsa_rpc.o 00:03:48.334 LIB libspdk_accel_ioat.a 00:03:48.334 LIB libspdk_keyring_file.a 00:03:48.334 LIB libspdk_blob_bdev.a 00:03:48.334 LIB libspdk_accel_iaa.a 00:03:48.334 SO libspdk_accel_ioat.so.6.0 00:03:48.334 CC module/keyring/linux/keyring.o 00:03:48.334 SO libspdk_keyring_file.so.2.0 00:03:48.334 SO libspdk_blob_bdev.so.11.0 00:03:48.334 SYMLINK libspdk_accel_error.so 00:03:48.334 SO libspdk_accel_iaa.so.3.0 00:03:48.592 SYMLINK libspdk_keyring_file.so 00:03:48.592 SYMLINK libspdk_blob_bdev.so 00:03:48.592 CC module/keyring/linux/keyring_rpc.o 00:03:48.592 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:48.592 SYMLINK libspdk_accel_ioat.so 00:03:48.592 SYMLINK libspdk_accel_iaa.so 00:03:48.592 LIB libspdk_accel_dsa.a 00:03:48.592 SO libspdk_accel_dsa.so.5.0 00:03:48.592 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:48.592 CC module/fsdev/aio/linux_aio_mgr.o 00:03:48.592 SYMLINK libspdk_accel_dsa.so 00:03:48.592 LIB libspdk_keyring_linux.a 00:03:48.592 SO libspdk_keyring_linux.so.1.0 00:03:48.592 CC module/scheduler/gscheduler/gscheduler.o 00:03:48.851 LIB libspdk_scheduler_dpdk_governor.a 00:03:48.851 SYMLINK libspdk_keyring_linux.so 00:03:48.851 CC module/bdev/delay/vbdev_delay.o 00:03:48.851 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:48.851 CC module/blobfs/bdev/blobfs_bdev.o 00:03:48.851 CC module/bdev/error/vbdev_error.o 00:03:48.851 CC module/bdev/error/vbdev_error_rpc.o 00:03:48.851 CC module/bdev/gpt/gpt.o 00:03:48.851 LIB libspdk_scheduler_gscheduler.a 00:03:48.851 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:48.851 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:48.851 LIB libspdk_fsdev_aio.a 
00:03:48.851 SO libspdk_scheduler_gscheduler.so.4.0 00:03:48.851 SO libspdk_fsdev_aio.so.1.0 00:03:48.851 CC module/bdev/lvol/vbdev_lvol.o 00:03:48.851 LIB libspdk_sock_posix.a 00:03:48.851 SYMLINK libspdk_scheduler_gscheduler.so 00:03:48.851 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:49.109 SO libspdk_sock_posix.so.6.0 00:03:49.109 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:49.109 SYMLINK libspdk_fsdev_aio.so 00:03:49.109 CC module/bdev/gpt/vbdev_gpt.o 00:03:49.109 SYMLINK libspdk_sock_posix.so 00:03:49.109 LIB libspdk_bdev_error.a 00:03:49.109 SO libspdk_bdev_error.so.6.0 00:03:49.109 LIB libspdk_bdev_delay.a 00:03:49.109 LIB libspdk_blobfs_bdev.a 00:03:49.109 CC module/bdev/malloc/bdev_malloc.o 00:03:49.109 SO libspdk_bdev_delay.so.6.0 00:03:49.109 SO libspdk_blobfs_bdev.so.6.0 00:03:49.109 CC module/bdev/null/bdev_null.o 00:03:49.109 CC module/bdev/nvme/bdev_nvme.o 00:03:49.109 CC module/bdev/passthru/vbdev_passthru.o 00:03:49.367 SYMLINK libspdk_bdev_error.so 00:03:49.367 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:49.367 SYMLINK libspdk_bdev_delay.so 00:03:49.367 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:49.367 SYMLINK libspdk_blobfs_bdev.so 00:03:49.367 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:49.367 LIB libspdk_bdev_gpt.a 00:03:49.367 SO libspdk_bdev_gpt.so.6.0 00:03:49.367 SYMLINK libspdk_bdev_gpt.so 00:03:49.367 CC module/bdev/nvme/nvme_rpc.o 00:03:49.367 CC module/bdev/null/bdev_null_rpc.o 00:03:49.625 CC module/bdev/nvme/bdev_mdns_client.o 00:03:49.625 LIB libspdk_bdev_passthru.a 00:03:49.625 CC module/bdev/raid/bdev_raid.o 00:03:49.625 LIB libspdk_bdev_lvol.a 00:03:49.625 SO libspdk_bdev_passthru.so.6.0 00:03:49.625 SO libspdk_bdev_lvol.so.6.0 00:03:49.625 CC module/bdev/split/vbdev_split.o 00:03:49.625 LIB libspdk_bdev_null.a 00:03:49.625 LIB libspdk_bdev_malloc.a 00:03:49.625 SYMLINK libspdk_bdev_passthru.so 00:03:49.625 SO libspdk_bdev_null.so.6.0 00:03:49.625 SYMLINK libspdk_bdev_lvol.so 00:03:49.625 CC 
module/bdev/raid/bdev_raid_rpc.o 00:03:49.625 CC module/bdev/raid/bdev_raid_sb.o 00:03:49.625 SO libspdk_bdev_malloc.so.6.0 00:03:49.625 CC module/bdev/raid/raid0.o 00:03:49.625 CC module/bdev/raid/raid1.o 00:03:49.625 SYMLINK libspdk_bdev_null.so 00:03:49.625 CC module/bdev/raid/concat.o 00:03:49.883 SYMLINK libspdk_bdev_malloc.so 00:03:49.883 CC module/bdev/nvme/vbdev_opal.o 00:03:49.883 CC module/bdev/split/vbdev_split_rpc.o 00:03:49.883 CC module/bdev/raid/raid5f.o 00:03:49.883 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:49.883 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:50.141 LIB libspdk_bdev_split.a 00:03:50.141 SO libspdk_bdev_split.so.6.0 00:03:50.141 CC module/bdev/aio/bdev_aio.o 00:03:50.141 SYMLINK libspdk_bdev_split.so 00:03:50.141 CC module/bdev/aio/bdev_aio_rpc.o 00:03:50.141 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:50.141 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:50.141 CC module/bdev/ftl/bdev_ftl.o 00:03:50.438 CC module/bdev/iscsi/bdev_iscsi.o 00:03:50.438 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:50.438 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:50.438 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:50.438 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:50.697 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:50.697 LIB libspdk_bdev_zone_block.a 00:03:50.697 LIB libspdk_bdev_aio.a 00:03:50.697 SO libspdk_bdev_zone_block.so.6.0 00:03:50.697 SO libspdk_bdev_aio.so.6.0 00:03:50.697 LIB libspdk_bdev_ftl.a 00:03:50.697 SYMLINK libspdk_bdev_zone_block.so 00:03:50.697 SO libspdk_bdev_ftl.so.6.0 00:03:50.697 SYMLINK libspdk_bdev_aio.so 00:03:50.697 LIB libspdk_bdev_iscsi.a 00:03:50.697 SYMLINK libspdk_bdev_ftl.so 00:03:50.697 SO libspdk_bdev_iscsi.so.6.0 00:03:50.955 LIB libspdk_bdev_raid.a 00:03:50.955 SYMLINK libspdk_bdev_iscsi.so 00:03:50.955 SO libspdk_bdev_raid.so.6.0 00:03:50.955 SYMLINK libspdk_bdev_raid.so 00:03:51.214 LIB libspdk_bdev_virtio.a 00:03:51.214 SO libspdk_bdev_virtio.so.6.0 00:03:51.214 SYMLINK libspdk_bdev_virtio.so 
00:03:52.153 LIB libspdk_bdev_nvme.a 00:03:52.153 SO libspdk_bdev_nvme.so.7.0 00:03:52.153 SYMLINK libspdk_bdev_nvme.so 00:03:52.722 CC module/event/subsystems/sock/sock.o 00:03:52.722 CC module/event/subsystems/fsdev/fsdev.o 00:03:52.722 CC module/event/subsystems/scheduler/scheduler.o 00:03:52.722 CC module/event/subsystems/keyring/keyring.o 00:03:52.722 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:52.722 CC module/event/subsystems/vmd/vmd.o 00:03:52.981 CC module/event/subsystems/iobuf/iobuf.o 00:03:52.981 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:52.981 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:52.981 LIB libspdk_event_fsdev.a 00:03:52.981 LIB libspdk_event_scheduler.a 00:03:52.981 LIB libspdk_event_sock.a 00:03:52.981 LIB libspdk_event_keyring.a 00:03:52.981 LIB libspdk_event_vmd.a 00:03:52.981 LIB libspdk_event_vhost_blk.a 00:03:52.981 SO libspdk_event_scheduler.so.4.0 00:03:52.981 SO libspdk_event_fsdev.so.1.0 00:03:52.981 LIB libspdk_event_iobuf.a 00:03:52.981 SO libspdk_event_sock.so.5.0 00:03:52.981 SO libspdk_event_keyring.so.1.0 00:03:52.981 SO libspdk_event_vhost_blk.so.3.0 00:03:52.981 SO libspdk_event_vmd.so.6.0 00:03:52.981 SO libspdk_event_iobuf.so.3.0 00:03:52.981 SYMLINK libspdk_event_sock.so 00:03:52.981 SYMLINK libspdk_event_scheduler.so 00:03:52.981 SYMLINK libspdk_event_fsdev.so 00:03:52.981 SYMLINK libspdk_event_keyring.so 00:03:52.981 SYMLINK libspdk_event_vhost_blk.so 00:03:52.981 SYMLINK libspdk_event_vmd.so 00:03:52.981 SYMLINK libspdk_event_iobuf.so 00:03:53.548 CC module/event/subsystems/accel/accel.o 00:03:53.548 LIB libspdk_event_accel.a 00:03:53.806 SO libspdk_event_accel.so.6.0 00:03:53.806 SYMLINK libspdk_event_accel.so 00:03:54.065 CC module/event/subsystems/bdev/bdev.o 00:03:54.323 LIB libspdk_event_bdev.a 00:03:54.323 SO libspdk_event_bdev.so.6.0 00:03:54.323 SYMLINK libspdk_event_bdev.so 00:03:54.901 CC module/event/subsystems/nbd/nbd.o 00:03:54.901 CC module/event/subsystems/ublk/ublk.o 00:03:54.901 CC 
module/event/subsystems/scsi/scsi.o 00:03:54.901 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:54.901 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:54.901 LIB libspdk_event_nbd.a 00:03:54.901 LIB libspdk_event_ublk.a 00:03:54.901 SO libspdk_event_nbd.so.6.0 00:03:54.901 LIB libspdk_event_scsi.a 00:03:54.901 SO libspdk_event_ublk.so.3.0 00:03:54.901 SO libspdk_event_scsi.so.6.0 00:03:54.901 SYMLINK libspdk_event_nbd.so 00:03:55.168 SYMLINK libspdk_event_ublk.so 00:03:55.168 SYMLINK libspdk_event_scsi.so 00:03:55.168 LIB libspdk_event_nvmf.a 00:03:55.168 SO libspdk_event_nvmf.so.6.0 00:03:55.168 SYMLINK libspdk_event_nvmf.so 00:03:55.427 CC module/event/subsystems/iscsi/iscsi.o 00:03:55.427 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:55.427 LIB libspdk_event_iscsi.a 00:03:55.687 LIB libspdk_event_vhost_scsi.a 00:03:55.687 SO libspdk_event_iscsi.so.6.0 00:03:55.687 SO libspdk_event_vhost_scsi.so.3.0 00:03:55.687 SYMLINK libspdk_event_iscsi.so 00:03:55.687 SYMLINK libspdk_event_vhost_scsi.so 00:03:55.946 SO libspdk.so.6.0 00:03:55.946 SYMLINK libspdk.so 00:03:56.205 CXX app/trace/trace.o 00:03:56.205 CC app/spdk_nvme_perf/perf.o 00:03:56.205 CC app/trace_record/trace_record.o 00:03:56.205 CC app/spdk_lspci/spdk_lspci.o 00:03:56.205 CC app/nvmf_tgt/nvmf_main.o 00:03:56.206 CC app/iscsi_tgt/iscsi_tgt.o 00:03:56.206 CC app/spdk_tgt/spdk_tgt.o 00:03:56.206 CC examples/ioat/perf/perf.o 00:03:56.206 CC examples/util/zipf/zipf.o 00:03:56.206 CC test/thread/poller_perf/poller_perf.o 00:03:56.465 LINK spdk_lspci 00:03:56.465 LINK nvmf_tgt 00:03:56.465 LINK iscsi_tgt 00:03:56.465 LINK poller_perf 00:03:56.465 LINK spdk_trace_record 00:03:56.465 LINK zipf 00:03:56.465 LINK spdk_tgt 00:03:56.465 LINK ioat_perf 00:03:56.724 LINK spdk_trace 00:03:56.724 CC app/spdk_nvme_identify/identify.o 00:03:56.724 TEST_HEADER include/spdk/accel.h 00:03:56.724 TEST_HEADER include/spdk/accel_module.h 00:03:56.724 TEST_HEADER include/spdk/assert.h 00:03:56.724 TEST_HEADER 
include/spdk/barrier.h 00:03:56.724 TEST_HEADER include/spdk/base64.h 00:03:56.724 CC examples/ioat/verify/verify.o 00:03:56.724 TEST_HEADER include/spdk/bdev.h 00:03:56.724 TEST_HEADER include/spdk/bdev_module.h 00:03:56.724 TEST_HEADER include/spdk/bdev_zone.h 00:03:56.724 TEST_HEADER include/spdk/bit_array.h 00:03:56.724 TEST_HEADER include/spdk/bit_pool.h 00:03:56.724 TEST_HEADER include/spdk/blob_bdev.h 00:03:56.724 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:56.724 TEST_HEADER include/spdk/blobfs.h 00:03:56.724 TEST_HEADER include/spdk/blob.h 00:03:56.724 TEST_HEADER include/spdk/conf.h 00:03:56.724 TEST_HEADER include/spdk/config.h 00:03:56.724 TEST_HEADER include/spdk/cpuset.h 00:03:56.724 TEST_HEADER include/spdk/crc16.h 00:03:56.724 TEST_HEADER include/spdk/crc32.h 00:03:56.724 TEST_HEADER include/spdk/crc64.h 00:03:56.724 TEST_HEADER include/spdk/dif.h 00:03:56.724 TEST_HEADER include/spdk/dma.h 00:03:56.724 TEST_HEADER include/spdk/endian.h 00:03:56.724 TEST_HEADER include/spdk/env_dpdk.h 00:03:56.724 TEST_HEADER include/spdk/env.h 00:03:56.724 TEST_HEADER include/spdk/event.h 00:03:56.724 TEST_HEADER include/spdk/fd_group.h 00:03:56.724 TEST_HEADER include/spdk/fd.h 00:03:56.724 TEST_HEADER include/spdk/file.h 00:03:56.724 TEST_HEADER include/spdk/fsdev.h 00:03:56.724 TEST_HEADER include/spdk/fsdev_module.h 00:03:56.724 TEST_HEADER include/spdk/ftl.h 00:03:56.724 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:56.724 TEST_HEADER include/spdk/gpt_spec.h 00:03:56.724 TEST_HEADER include/spdk/hexlify.h 00:03:56.724 TEST_HEADER include/spdk/histogram_data.h 00:03:56.724 TEST_HEADER include/spdk/idxd.h 00:03:56.724 TEST_HEADER include/spdk/idxd_spec.h 00:03:56.724 TEST_HEADER include/spdk/init.h 00:03:56.724 TEST_HEADER include/spdk/ioat.h 00:03:56.724 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:56.725 TEST_HEADER include/spdk/ioat_spec.h 00:03:56.725 CC test/dma/test_dma/test_dma.o 00:03:56.725 TEST_HEADER include/spdk/iscsi_spec.h 00:03:56.725 
TEST_HEADER include/spdk/json.h 00:03:56.725 TEST_HEADER include/spdk/jsonrpc.h 00:03:56.725 TEST_HEADER include/spdk/keyring.h 00:03:56.725 TEST_HEADER include/spdk/keyring_module.h 00:03:56.725 TEST_HEADER include/spdk/likely.h 00:03:56.725 TEST_HEADER include/spdk/log.h 00:03:56.725 TEST_HEADER include/spdk/lvol.h 00:03:56.725 TEST_HEADER include/spdk/md5.h 00:03:56.725 TEST_HEADER include/spdk/memory.h 00:03:56.725 TEST_HEADER include/spdk/mmio.h 00:03:56.725 CC test/app/bdev_svc/bdev_svc.o 00:03:56.725 TEST_HEADER include/spdk/nbd.h 00:03:56.725 TEST_HEADER include/spdk/net.h 00:03:56.725 TEST_HEADER include/spdk/notify.h 00:03:56.725 TEST_HEADER include/spdk/nvme.h 00:03:56.725 TEST_HEADER include/spdk/nvme_intel.h 00:03:56.725 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:56.725 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:56.725 TEST_HEADER include/spdk/nvme_spec.h 00:03:56.984 TEST_HEADER include/spdk/nvme_zns.h 00:03:56.984 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:56.984 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:56.984 TEST_HEADER include/spdk/nvmf.h 00:03:56.984 TEST_HEADER include/spdk/nvmf_spec.h 00:03:56.984 TEST_HEADER include/spdk/nvmf_transport.h 00:03:56.984 TEST_HEADER include/spdk/opal.h 00:03:56.984 TEST_HEADER include/spdk/opal_spec.h 00:03:56.984 TEST_HEADER include/spdk/pci_ids.h 00:03:56.984 TEST_HEADER include/spdk/pipe.h 00:03:56.984 TEST_HEADER include/spdk/queue.h 00:03:56.984 CC test/env/vtophys/vtophys.o 00:03:56.984 TEST_HEADER include/spdk/reduce.h 00:03:56.984 TEST_HEADER include/spdk/rpc.h 00:03:56.984 TEST_HEADER include/spdk/scheduler.h 00:03:56.984 TEST_HEADER include/spdk/scsi.h 00:03:56.984 TEST_HEADER include/spdk/scsi_spec.h 00:03:56.984 TEST_HEADER include/spdk/sock.h 00:03:56.984 TEST_HEADER include/spdk/stdinc.h 00:03:56.984 TEST_HEADER include/spdk/string.h 00:03:56.984 TEST_HEADER include/spdk/thread.h 00:03:56.984 TEST_HEADER include/spdk/trace.h 00:03:56.984 TEST_HEADER include/spdk/trace_parser.h 00:03:56.984 
TEST_HEADER include/spdk/tree.h 00:03:56.984 TEST_HEADER include/spdk/ublk.h 00:03:56.984 TEST_HEADER include/spdk/util.h 00:03:56.984 CC test/env/mem_callbacks/mem_callbacks.o 00:03:56.984 TEST_HEADER include/spdk/uuid.h 00:03:56.984 TEST_HEADER include/spdk/version.h 00:03:56.984 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:56.984 CC examples/thread/thread/thread_ex.o 00:03:56.984 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:56.984 TEST_HEADER include/spdk/vhost.h 00:03:56.984 TEST_HEADER include/spdk/vmd.h 00:03:56.984 TEST_HEADER include/spdk/xor.h 00:03:56.984 TEST_HEADER include/spdk/zipf.h 00:03:56.984 CXX test/cpp_headers/accel.o 00:03:56.984 LINK verify 00:03:56.984 LINK bdev_svc 00:03:56.984 LINK interrupt_tgt 00:03:56.984 LINK vtophys 00:03:56.984 CXX test/cpp_headers/accel_module.o 00:03:57.244 LINK spdk_nvme_perf 00:03:57.244 LINK thread 00:03:57.244 CXX test/cpp_headers/assert.o 00:03:57.244 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:57.244 CC examples/sock/hello_world/hello_sock.o 00:03:57.244 CC test/app/histogram_perf/histogram_perf.o 00:03:57.504 LINK test_dma 00:03:57.504 CXX test/cpp_headers/barrier.o 00:03:57.504 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:57.504 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:57.504 LINK histogram_perf 00:03:57.504 LINK mem_callbacks 00:03:57.504 LINK env_dpdk_post_init 00:03:57.504 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:57.504 CXX test/cpp_headers/base64.o 00:03:57.504 LINK hello_sock 00:03:57.504 LINK spdk_nvme_identify 00:03:57.504 CXX test/cpp_headers/bdev.o 00:03:57.763 CC app/spdk_nvme_discover/discovery_aer.o 00:03:57.763 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:57.763 CC test/env/memory/memory_ut.o 00:03:57.763 CC test/env/pci/pci_ut.o 00:03:57.763 CXX test/cpp_headers/bdev_module.o 00:03:58.023 CC test/event/event_perf/event_perf.o 00:03:58.023 LINK nvme_fuzz 00:03:58.023 CC examples/vmd/lsvmd/lsvmd.o 00:03:58.023 LINK spdk_nvme_discover 00:03:58.023 CC 
examples/idxd/perf/perf.o 00:03:58.023 CXX test/cpp_headers/bdev_zone.o 00:03:58.023 LINK event_perf 00:03:58.023 LINK lsvmd 00:03:58.023 CC app/spdk_top/spdk_top.o 00:03:58.282 CC test/event/reactor/reactor.o 00:03:58.282 LINK vhost_fuzz 00:03:58.282 CXX test/cpp_headers/bit_array.o 00:03:58.282 LINK pci_ut 00:03:58.282 LINK idxd_perf 00:03:58.282 CC test/event/reactor_perf/reactor_perf.o 00:03:58.282 CC examples/vmd/led/led.o 00:03:58.282 LINK reactor 00:03:58.282 CXX test/cpp_headers/bit_pool.o 00:03:58.282 CXX test/cpp_headers/blob_bdev.o 00:03:58.559 LINK reactor_perf 00:03:58.559 CXX test/cpp_headers/blobfs_bdev.o 00:03:58.559 LINK led 00:03:58.559 CC test/event/app_repeat/app_repeat.o 00:03:58.559 CC test/app/jsoncat/jsoncat.o 00:03:58.559 CC app/spdk_dd/spdk_dd.o 00:03:58.559 CC app/vhost/vhost.o 00:03:58.559 CXX test/cpp_headers/blobfs.o 00:03:58.559 CC test/app/stub/stub.o 00:03:58.849 LINK app_repeat 00:03:58.849 LINK jsoncat 00:03:58.849 CXX test/cpp_headers/blob.o 00:03:58.849 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:58.849 LINK vhost 00:03:58.849 LINK stub 00:03:58.849 LINK memory_ut 00:03:58.849 CXX test/cpp_headers/conf.o 00:03:59.108 CC test/event/scheduler/scheduler.o 00:03:59.108 LINK spdk_dd 00:03:59.108 LINK spdk_top 00:03:59.108 CC examples/accel/perf/accel_perf.o 00:03:59.108 LINK hello_fsdev 00:03:59.108 CXX test/cpp_headers/config.o 00:03:59.108 CXX test/cpp_headers/cpuset.o 00:03:59.108 CXX test/cpp_headers/crc16.o 00:03:59.108 CC app/fio/nvme/fio_plugin.o 00:03:59.108 CC examples/blob/hello_world/hello_blob.o 00:03:59.368 LINK scheduler 00:03:59.368 CC examples/blob/cli/blobcli.o 00:03:59.368 CXX test/cpp_headers/crc32.o 00:03:59.368 CC test/nvme/aer/aer.o 00:03:59.368 CC test/nvme/reset/reset.o 00:03:59.368 LINK iscsi_fuzz 00:03:59.368 CC app/fio/bdev/fio_plugin.o 00:03:59.368 CXX test/cpp_headers/crc64.o 00:03:59.368 LINK hello_blob 00:03:59.627 CC test/nvme/sgl/sgl.o 00:03:59.627 CXX test/cpp_headers/dif.o 00:03:59.627 LINK 
aer 00:03:59.627 LINK accel_perf 00:03:59.627 LINK reset 00:03:59.627 CC test/rpc_client/rpc_client_test.o 00:03:59.886 CXX test/cpp_headers/dma.o 00:03:59.886 LINK spdk_nvme 00:03:59.886 CC test/accel/dif/dif.o 00:03:59.886 LINK blobcli 00:03:59.886 LINK sgl 00:03:59.886 CC test/nvme/e2edp/nvme_dp.o 00:03:59.886 LINK rpc_client_test 00:03:59.886 CXX test/cpp_headers/endian.o 00:03:59.886 LINK spdk_bdev 00:04:00.145 CC examples/nvme/hello_world/hello_world.o 00:04:00.145 CC test/blobfs/mkfs/mkfs.o 00:04:00.145 CXX test/cpp_headers/env_dpdk.o 00:04:00.145 CC test/nvme/overhead/overhead.o 00:04:00.145 CC examples/bdev/hello_world/hello_bdev.o 00:04:00.145 CC examples/bdev/bdevperf/bdevperf.o 00:04:00.145 CC test/nvme/err_injection/err_injection.o 00:04:00.145 CC test/nvme/startup/startup.o 00:04:00.145 LINK nvme_dp 00:04:00.145 LINK hello_world 00:04:00.145 CXX test/cpp_headers/env.o 00:04:00.145 LINK mkfs 00:04:00.404 LINK startup 00:04:00.404 LINK err_injection 00:04:00.404 LINK hello_bdev 00:04:00.404 CXX test/cpp_headers/event.o 00:04:00.404 LINK overhead 00:04:00.404 CC examples/nvme/reconnect/reconnect.o 00:04:00.404 CC test/nvme/reserve/reserve.o 00:04:00.662 CC test/nvme/simple_copy/simple_copy.o 00:04:00.662 CXX test/cpp_headers/fd_group.o 00:04:00.662 CC test/nvme/connect_stress/connect_stress.o 00:04:00.662 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:00.662 CC examples/nvme/arbitration/arbitration.o 00:04:00.662 LINK dif 00:04:00.662 CXX test/cpp_headers/fd.o 00:04:00.662 LINK reserve 00:04:00.919 LINK connect_stress 00:04:00.919 LINK simple_copy 00:04:00.919 CXX test/cpp_headers/file.o 00:04:00.919 LINK reconnect 00:04:00.919 CC test/lvol/esnap/esnap.o 00:04:00.919 CXX test/cpp_headers/fsdev.o 00:04:00.919 CXX test/cpp_headers/fsdev_module.o 00:04:00.919 CXX test/cpp_headers/ftl.o 00:04:01.178 LINK arbitration 00:04:01.178 LINK bdevperf 00:04:01.178 CC test/nvme/boot_partition/boot_partition.o 00:04:01.178 CXX test/cpp_headers/fuse_dispatcher.o 
00:04:01.178 CXX test/cpp_headers/gpt_spec.o 00:04:01.178 CXX test/cpp_headers/hexlify.o 00:04:01.178 CC examples/nvme/hotplug/hotplug.o 00:04:01.178 LINK nvme_manage 00:04:01.178 CC test/bdev/bdevio/bdevio.o 00:04:01.436 CXX test/cpp_headers/histogram_data.o 00:04:01.436 LINK boot_partition 00:04:01.436 CXX test/cpp_headers/idxd.o 00:04:01.436 CXX test/cpp_headers/idxd_spec.o 00:04:01.436 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:01.436 CC test/nvme/compliance/nvme_compliance.o 00:04:01.436 LINK hotplug 00:04:01.436 CXX test/cpp_headers/init.o 00:04:01.436 CXX test/cpp_headers/ioat.o 00:04:01.436 CC examples/nvme/abort/abort.o 00:04:01.436 CXX test/cpp_headers/ioat_spec.o 00:04:01.694 LINK cmb_copy 00:04:01.694 CC test/nvme/fused_ordering/fused_ordering.o 00:04:01.694 CXX test/cpp_headers/iscsi_spec.o 00:04:01.694 LINK bdevio 00:04:01.694 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:01.694 CXX test/cpp_headers/json.o 00:04:01.694 CC test/nvme/fdp/fdp.o 00:04:01.694 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:01.952 LINK fused_ordering 00:04:01.952 LINK nvme_compliance 00:04:01.952 CC test/nvme/cuse/cuse.o 00:04:01.952 CXX test/cpp_headers/jsonrpc.o 00:04:01.952 LINK pmr_persistence 00:04:01.952 CXX test/cpp_headers/keyring.o 00:04:01.952 LINK abort 00:04:01.952 LINK doorbell_aers 00:04:01.952 CXX test/cpp_headers/keyring_module.o 00:04:01.952 CXX test/cpp_headers/likely.o 00:04:02.211 CXX test/cpp_headers/log.o 00:04:02.211 LINK fdp 00:04:02.211 CXX test/cpp_headers/lvol.o 00:04:02.211 CXX test/cpp_headers/md5.o 00:04:02.211 CXX test/cpp_headers/memory.o 00:04:02.211 CXX test/cpp_headers/mmio.o 00:04:02.211 CXX test/cpp_headers/nbd.o 00:04:02.211 CXX test/cpp_headers/net.o 00:04:02.211 CXX test/cpp_headers/notify.o 00:04:02.211 CXX test/cpp_headers/nvme_intel.o 00:04:02.211 CXX test/cpp_headers/nvme.o 00:04:02.473 CXX test/cpp_headers/nvme_ocssd.o 00:04:02.473 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:02.473 CC examples/nvmf/nvmf/nvmf.o 
00:04:02.473 CXX test/cpp_headers/nvme_spec.o 00:04:02.473 CXX test/cpp_headers/nvme_zns.o 00:04:02.473 CXX test/cpp_headers/nvmf_cmd.o 00:04:02.473 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:02.473 CXX test/cpp_headers/nvmf.o 00:04:02.473 CXX test/cpp_headers/nvmf_spec.o 00:04:02.732 CXX test/cpp_headers/nvmf_transport.o 00:04:02.732 CXX test/cpp_headers/opal.o 00:04:02.732 CXX test/cpp_headers/opal_spec.o 00:04:02.732 CXX test/cpp_headers/pci_ids.o 00:04:02.732 CXX test/cpp_headers/pipe.o 00:04:02.732 CXX test/cpp_headers/queue.o 00:04:02.732 CXX test/cpp_headers/reduce.o 00:04:02.732 LINK nvmf 00:04:02.732 CXX test/cpp_headers/rpc.o 00:04:02.732 CXX test/cpp_headers/scheduler.o 00:04:02.732 CXX test/cpp_headers/scsi.o 00:04:02.732 CXX test/cpp_headers/scsi_spec.o 00:04:02.732 CXX test/cpp_headers/sock.o 00:04:02.991 CXX test/cpp_headers/stdinc.o 00:04:02.991 CXX test/cpp_headers/string.o 00:04:02.991 CXX test/cpp_headers/thread.o 00:04:02.991 CXX test/cpp_headers/trace.o 00:04:02.991 CXX test/cpp_headers/trace_parser.o 00:04:02.991 CXX test/cpp_headers/tree.o 00:04:02.991 CXX test/cpp_headers/ublk.o 00:04:02.991 CXX test/cpp_headers/util.o 00:04:02.991 CXX test/cpp_headers/uuid.o 00:04:02.991 CXX test/cpp_headers/version.o 00:04:02.991 CXX test/cpp_headers/vfio_user_pci.o 00:04:02.991 CXX test/cpp_headers/vfio_user_spec.o 00:04:02.991 CXX test/cpp_headers/vhost.o 00:04:03.249 CXX test/cpp_headers/vmd.o 00:04:03.250 CXX test/cpp_headers/xor.o 00:04:03.250 CXX test/cpp_headers/zipf.o 00:04:03.250 LINK cuse 00:04:07.443 LINK esnap 00:04:07.703 00:04:07.703 real 1m33.851s 00:04:07.703 user 8m15.489s 00:04:07.703 sys 1m43.018s 00:04:07.703 03:06:50 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:04:07.703 03:06:50 make -- common/autotest_common.sh@10 -- $ set +x 00:04:07.703 ************************************ 00:04:07.703 END TEST make 00:04:07.703 ************************************ 00:04:07.703 03:06:50 -- spdk/autobuild.sh@1 -- $ 
stop_monitor_resources 00:04:07.703 03:06:50 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:07.703 03:06:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:07.703 03:06:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.703 03:06:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:07.703 03:06:50 -- pm/common@44 -- $ pid=5451 00:04:07.703 03:06:50 -- pm/common@50 -- $ kill -TERM 5451 00:04:07.703 03:06:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.703 03:06:50 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:07.703 03:06:50 -- pm/common@44 -- $ pid=5453 00:04:07.703 03:06:50 -- pm/common@50 -- $ kill -TERM 5453 00:04:07.963 03:06:51 -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:07.963 03:06:51 -- common/autotest_common.sh@1681 -- # lcov --version 00:04:07.963 03:06:51 -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:07.963 03:06:51 -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:07.963 03:06:51 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.963 03:06:51 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.963 03:06:51 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.963 03:06:51 -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.963 03:06:51 -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.963 03:06:51 -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.963 03:06:51 -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.963 03:06:51 -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.963 03:06:51 -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.963 03:06:51 -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.963 03:06:51 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.963 03:06:51 -- scripts/common.sh@344 -- # case "$op" in 00:04:07.963 03:06:51 -- scripts/common.sh@345 -- # : 1 00:04:07.963 03:06:51 -- scripts/common.sh@364 -- 
# (( v = 0 )) 00:04:07.963 03:06:51 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:07.963 03:06:51 -- scripts/common.sh@365 -- # decimal 1 00:04:07.963 03:06:51 -- scripts/common.sh@353 -- # local d=1 00:04:07.963 03:06:51 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.963 03:06:51 -- scripts/common.sh@355 -- # echo 1 00:04:07.963 03:06:51 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.963 03:06:51 -- scripts/common.sh@366 -- # decimal 2 00:04:07.963 03:06:51 -- scripts/common.sh@353 -- # local d=2 00:04:07.963 03:06:51 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.963 03:06:51 -- scripts/common.sh@355 -- # echo 2 00:04:07.963 03:06:51 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.963 03:06:51 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.963 03:06:51 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.963 03:06:51 -- scripts/common.sh@368 -- # return 0 00:04:07.963 03:06:51 -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.963 03:06:51 -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:07.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.963 --rc genhtml_branch_coverage=1 00:04:07.963 --rc genhtml_function_coverage=1 00:04:07.963 --rc genhtml_legend=1 00:04:07.963 --rc geninfo_all_blocks=1 00:04:07.963 --rc geninfo_unexecuted_blocks=1 00:04:07.963 00:04:07.963 ' 00:04:07.963 03:06:51 -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:07.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.963 --rc genhtml_branch_coverage=1 00:04:07.963 --rc genhtml_function_coverage=1 00:04:07.963 --rc genhtml_legend=1 00:04:07.963 --rc geninfo_all_blocks=1 00:04:07.963 --rc geninfo_unexecuted_blocks=1 00:04:07.963 00:04:07.963 ' 00:04:07.963 03:06:51 -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:07.963 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:07.963 --rc genhtml_branch_coverage=1 00:04:07.963 --rc genhtml_function_coverage=1 00:04:07.963 --rc genhtml_legend=1 00:04:07.963 --rc geninfo_all_blocks=1 00:04:07.963 --rc geninfo_unexecuted_blocks=1 00:04:07.963 00:04:07.963 ' 00:04:07.963 03:06:51 -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:07.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.963 --rc genhtml_branch_coverage=1 00:04:07.963 --rc genhtml_function_coverage=1 00:04:07.963 --rc genhtml_legend=1 00:04:07.963 --rc geninfo_all_blocks=1 00:04:07.963 --rc geninfo_unexecuted_blocks=1 00:04:07.963 00:04:07.963 ' 00:04:07.963 03:06:51 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:07.963 03:06:51 -- nvmf/common.sh@7 -- # uname -s 00:04:07.963 03:06:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.963 03:06:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.963 03:06:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.963 03:06:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.963 03:06:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.963 03:06:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.963 03:06:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.963 03:06:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.963 03:06:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.963 03:06:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.963 03:06:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ebf1727-052a-45f1-8522-0162d29da5c7 00:04:07.963 03:06:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=9ebf1727-052a-45f1-8522-0162d29da5c7 00:04:07.963 03:06:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.963 03:06:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.963 03:06:51 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 
00:04:07.963 03:06:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:07.964 03:06:51 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:07.964 03:06:51 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:07.964 03:06:51 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.964 03:06:51 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.964 03:06:51 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.964 03:06:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.964 03:06:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.964 03:06:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.964 03:06:51 -- paths/export.sh@5 -- # export PATH 00:04:07.964 03:06:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.964 03:06:51 -- nvmf/common.sh@51 -- # : 0 00:04:07.964 03:06:51 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:07.964 03:06:51 -- 
nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:07.964 03:06:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:07.964 03:06:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.964 03:06:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.964 03:06:51 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:07.964 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:07.964 03:06:51 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:07.964 03:06:51 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:07.964 03:06:51 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:07.964 03:06:51 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:08.224 03:06:51 -- spdk/autotest.sh@32 -- # uname -s 00:04:08.224 03:06:51 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:08.224 03:06:51 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:08.224 03:06:51 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:08.224 03:06:51 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:08.224 03:06:51 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:08.224 03:06:51 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:08.224 03:06:51 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:08.224 03:06:51 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:08.224 03:06:51 -- spdk/autotest.sh@48 -- # udevadm_pid=54521 00:04:08.224 03:06:51 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:08.224 03:06:51 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:08.224 03:06:51 -- pm/common@17 -- # local monitor 00:04:08.224 03:06:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.224 03:06:51 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:08.224 03:06:51 -- pm/common@25 -- # sleep 
1 00:04:08.224 03:06:51 -- pm/common@21 -- # date +%s 00:04:08.224 03:06:51 -- pm/common@21 -- # date +%s 00:04:08.224 03:06:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728443211 00:04:08.224 03:06:51 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1728443211 00:04:08.224 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728443211_collect-vmstat.pm.log 00:04:08.224 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1728443211_collect-cpu-load.pm.log 00:04:09.163 03:06:52 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:09.163 03:06:52 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:09.163 03:06:52 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:09.163 03:06:52 -- common/autotest_common.sh@10 -- # set +x 00:04:09.163 03:06:52 -- spdk/autotest.sh@59 -- # create_test_list 00:04:09.163 03:06:52 -- common/autotest_common.sh@748 -- # xtrace_disable 00:04:09.163 03:06:52 -- common/autotest_common.sh@10 -- # set +x 00:04:09.163 03:06:52 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:09.163 03:06:52 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:09.163 03:06:52 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:09.163 03:06:52 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:09.163 03:06:52 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:09.163 03:06:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:09.163 03:06:52 -- common/autotest_common.sh@1455 -- # uname 00:04:09.163 03:06:52 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:04:09.163 03:06:52 -- spdk/autotest.sh@66 -- # 
freebsd_set_maxsock_buf 00:04:09.163 03:06:52 -- common/autotest_common.sh@1475 -- # uname 00:04:09.163 03:06:52 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:04:09.163 03:06:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:09.163 03:06:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:09.423 lcov: LCOV version 1.15 00:04:09.423 03:06:52 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:04:24.359 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:24.359 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:04:39.263 03:07:20 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:39.263 03:07:20 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:39.263 03:07:20 -- common/autotest_common.sh@10 -- # set +x 00:04:39.263 03:07:20 -- spdk/autotest.sh@78 -- # rm -f 00:04:39.263 03:07:20 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:39.263 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:39.263 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:04:39.263 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:04:39.263 03:07:21 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:39.263 03:07:21 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:04:39.263 03:07:21 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:04:39.263 
03:07:21 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:04:39.263 03:07:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:39.263 03:07:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:04:39.263 03:07:21 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:04:39.263 03:07:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:39.263 03:07:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:39.263 03:07:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:39.263 03:07:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:04:39.263 03:07:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:04:39.263 03:07:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:04:39.263 03:07:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:39.263 03:07:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:39.263 03:07:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:04:39.263 03:07:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:04:39.263 03:07:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:04:39.263 03:07:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:39.263 03:07:21 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:04:39.263 03:07:21 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:04:39.263 03:07:21 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:04:39.263 03:07:21 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:04:39.263 03:07:21 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:04:39.263 03:07:21 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:39.263 03:07:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.263 03:07:21 -- spdk/autotest.sh@99 -- 
# [[ -z '' ]] 00:04:39.263 03:07:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:39.263 03:07:21 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:39.263 03:07:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:39.263 No valid GPT data, bailing 00:04:39.263 03:07:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:39.263 03:07:21 -- scripts/common.sh@394 -- # pt= 00:04:39.263 03:07:21 -- scripts/common.sh@395 -- # return 1 00:04:39.263 03:07:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:39.263 1+0 records in 00:04:39.263 1+0 records out 00:04:39.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00620522 s, 169 MB/s 00:04:39.263 03:07:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.263 03:07:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:39.263 03:07:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:04:39.263 03:07:21 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:04:39.263 03:07:21 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:04:39.263 No valid GPT data, bailing 00:04:39.263 03:07:21 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:04:39.263 03:07:21 -- scripts/common.sh@394 -- # pt= 00:04:39.263 03:07:21 -- scripts/common.sh@395 -- # return 1 00:04:39.263 03:07:21 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:04:39.263 1+0 records in 00:04:39.263 1+0 records out 00:04:39.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00650332 s, 161 MB/s 00:04:39.263 03:07:21 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.263 03:07:21 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:39.263 03:07:21 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:04:39.263 03:07:21 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:04:39.263 03:07:21 -- scripts/common.sh@390 -- 
# /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:04:39.263 No valid GPT data, bailing 00:04:39.263 03:07:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:04:39.263 03:07:22 -- scripts/common.sh@394 -- # pt= 00:04:39.263 03:07:22 -- scripts/common.sh@395 -- # return 1 00:04:39.263 03:07:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:04:39.263 1+0 records in 00:04:39.263 1+0 records out 00:04:39.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00660438 s, 159 MB/s 00:04:39.263 03:07:22 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:39.263 03:07:22 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:39.263 03:07:22 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:04:39.263 03:07:22 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:04:39.263 03:07:22 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:04:39.263 No valid GPT data, bailing 00:04:39.263 03:07:22 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:04:39.263 03:07:22 -- scripts/common.sh@394 -- # pt= 00:04:39.263 03:07:22 -- scripts/common.sh@395 -- # return 1 00:04:39.263 03:07:22 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:04:39.263 1+0 records in 00:04:39.263 1+0 records out 00:04:39.263 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00635283 s, 165 MB/s 00:04:39.263 03:07:22 -- spdk/autotest.sh@105 -- # sync 00:04:39.263 03:07:22 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:39.263 03:07:22 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:39.263 03:07:22 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:41.797 03:07:24 -- spdk/autotest.sh@111 -- # uname -s 00:04:41.797 03:07:24 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:41.797 03:07:24 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:41.797 03:07:24 -- 
spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:04:42.732 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:42.732 Hugepages 00:04:42.732 node hugesize free / total 00:04:42.732 node0 1048576kB 0 / 0 00:04:42.732 node0 2048kB 0 / 0 00:04:42.732 00:04:42.732 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:42.732 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:04:42.732 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:04:42.990 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:04:42.990 03:07:26 -- spdk/autotest.sh@117 -- # uname -s 00:04:42.990 03:07:26 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:42.990 03:07:26 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:42.990 03:07:26 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:43.556 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:43.814 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.814 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:43.814 03:07:27 -- common/autotest_common.sh@1515 -- # sleep 1 00:04:45.195 03:07:28 -- common/autotest_common.sh@1516 -- # bdfs=() 00:04:45.195 03:07:28 -- common/autotest_common.sh@1516 -- # local bdfs 00:04:45.195 03:07:28 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:04:45.195 03:07:28 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:04:45.195 03:07:28 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:45.195 03:07:28 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:45.195 03:07:28 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:45.195 03:07:28 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:45.195 03:07:28 -- 
common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:45.195 03:07:28 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:45.195 03:07:28 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:45.195 03:07:28 -- common/autotest_common.sh@1520 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:45.467 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:45.467 Waiting for block devices as requested 00:04:45.467 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:45.726 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:45.726 03:07:28 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:45.726 03:07:28 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:45.726 03:07:28 -- common/autotest_common.sh@1485 -- # grep 0000:00:10.0/nvme/nvme 00:04:45.726 03:07:28 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:45.726 03:07:28 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:45.726 03:07:28 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:45.726 03:07:28 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:45.726 03:07:28 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme1 00:04:45.726 03:07:28 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme1 00:04:45.726 03:07:28 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme1 ]] 00:04:45.726 03:07:28 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme1 00:04:45.726 03:07:28 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:45.726 03:07:28 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:45.726 03:07:28 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:45.726 03:07:28 -- 
common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:45.726 03:07:28 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:45.726 03:07:28 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:45.726 03:07:28 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme1 00:04:45.726 03:07:28 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:45.726 03:07:28 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:45.726 03:07:28 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:45.726 03:07:28 -- common/autotest_common.sh@1541 -- # continue 00:04:45.726 03:07:28 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:04:45.726 03:07:28 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:45.726 03:07:28 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:04:45.726 03:07:28 -- common/autotest_common.sh@1485 -- # grep 0000:00:11.0/nvme/nvme 00:04:45.726 03:07:28 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:45.726 03:07:28 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:45.726 03:07:28 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:45.726 03:07:29 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:04:45.726 03:07:29 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:04:45.726 03:07:29 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:04:45.726 03:07:29 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:04:45.726 03:07:29 -- common/autotest_common.sh@1529 -- # grep oacs 00:04:45.726 03:07:29 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:04:45.726 03:07:29 -- common/autotest_common.sh@1529 -- # oacs=' 0x12a' 00:04:45.726 03:07:29 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:04:45.726 03:07:29 -- 
common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:04:45.726 03:07:29 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:04:45.726 03:07:29 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:04:45.726 03:07:29 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:04:45.984 03:07:29 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:04:45.984 03:07:29 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:04:45.984 03:07:29 -- common/autotest_common.sh@1541 -- # continue 00:04:45.984 03:07:29 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:45.984 03:07:29 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:45.984 03:07:29 -- common/autotest_common.sh@10 -- # set +x 00:04:45.984 03:07:29 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:45.984 03:07:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:45.984 03:07:29 -- common/autotest_common.sh@10 -- # set +x 00:04:45.984 03:07:29 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:46.921 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:46.921 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.921 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:46.921 03:07:30 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:46.921 03:07:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:46.921 03:07:30 -- common/autotest_common.sh@10 -- # set +x 00:04:46.921 03:07:30 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:46.921 03:07:30 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:04:46.921 03:07:30 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:04:46.921 03:07:30 -- common/autotest_common.sh@1561 -- # bdfs=() 00:04:46.921 03:07:30 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:04:46.921 03:07:30 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:04:46.921 03:07:30 -- 
common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:04:46.921 03:07:30 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:04:46.921 03:07:30 -- common/autotest_common.sh@1496 -- # bdfs=() 00:04:46.921 03:07:30 -- common/autotest_common.sh@1496 -- # local bdfs 00:04:46.921 03:07:30 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:46.921 03:07:30 -- common/autotest_common.sh@1497 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:46.921 03:07:30 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:04:47.180 03:07:30 -- common/autotest_common.sh@1498 -- # (( 2 == 0 )) 00:04:47.180 03:07:30 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:04:47.180 03:07:30 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:47.180 03:07:30 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:47.180 03:07:30 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:47.180 03:07:30 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:47.180 03:07:30 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:04:47.180 03:07:30 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:47.180 03:07:30 -- common/autotest_common.sh@1564 -- # device=0x0010 00:04:47.180 03:07:30 -- common/autotest_common.sh@1565 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:47.180 03:07:30 -- common/autotest_common.sh@1570 -- # (( 0 > 0 )) 00:04:47.180 03:07:30 -- common/autotest_common.sh@1570 -- # return 0 00:04:47.180 03:07:30 -- common/autotest_common.sh@1577 -- # [[ -z '' ]] 00:04:47.180 03:07:30 -- common/autotest_common.sh@1578 -- # return 0 00:04:47.180 03:07:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:47.180 03:07:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:47.180 03:07:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 
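The opal_revert_cleanup trace above resolves the NVMe BDFs via gen_nvme.sh piped through jq, then filters them by reading each controller's PCI device ID from sysfs and comparing it to 0x0a54. A minimal standalone sketch of that sysfs filter (the helper name is hypothetical, not SPDK's; it assumes only the /sys/bus/pci/devices/<bdf>/device layout visible in the trace):

```shell
# Hedged sketch (not the SPDK implementation): print every PCI function under
# a sysfs root whose "device" file matches a target ID, mirroring the
# [[ 0x0010 == 0x0a54 ]] comparison in the trace above.
list_bdfs_by_device_id() {
  local root=$1 target=$2 dev id
  for dev in "$root"/*; do
    if [[ -r "$dev/device" ]]; then
      id=$(<"$dev/device")
      if [[ "$id" == "$target" ]]; then
        printf '%s\n' "${dev##*/}"
      fi
    fi
  done
}

# e.g. list_bdfs_by_device_id /sys/bus/pci/devices 0x0a54
```

With the two QEMU controllers from this run (device ID 0x0010), the filter would print nothing, which is why the trace falls through to `(( 0 > 0 ))` and returns 0.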
00:04:47.180 03:07:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:47.180 03:07:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:47.180 03:07:30 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:47.180 03:07:30 -- common/autotest_common.sh@10 -- # set +x 00:04:47.180 03:07:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:47.180 03:07:30 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:47.180 03:07:30 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.180 03:07:30 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.180 03:07:30 -- common/autotest_common.sh@10 -- # set +x 00:04:47.180 ************************************ 00:04:47.180 START TEST env 00:04:47.180 ************************************ 00:04:47.180 03:07:30 env -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:47.180 * Looking for test storage... 00:04:47.180 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:47.180 03:07:30 env -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:47.180 03:07:30 env -- common/autotest_common.sh@1681 -- # lcov --version 00:04:47.180 03:07:30 env -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:47.439 03:07:30 env -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:47.439 03:07:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.439 03:07:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.439 03:07:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.439 03:07:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.439 03:07:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.439 03:07:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.439 03:07:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.439 03:07:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.439 03:07:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.439 03:07:30 env -- 
scripts/common.sh@341 -- # ver2_l=1 00:04:47.439 03:07:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:47.439 03:07:30 env -- scripts/common.sh@344 -- # case "$op" in 00:04:47.439 03:07:30 env -- scripts/common.sh@345 -- # : 1 00:04:47.439 03:07:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.439 03:07:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.439 03:07:30 env -- scripts/common.sh@365 -- # decimal 1 00:04:47.439 03:07:30 env -- scripts/common.sh@353 -- # local d=1 00:04:47.439 03:07:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.439 03:07:30 env -- scripts/common.sh@355 -- # echo 1 00:04:47.439 03:07:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.439 03:07:30 env -- scripts/common.sh@366 -- # decimal 2 00:04:47.439 03:07:30 env -- scripts/common.sh@353 -- # local d=2 00:04:47.439 03:07:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.439 03:07:30 env -- scripts/common.sh@355 -- # echo 2 00:04:47.439 03:07:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.439 03:07:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.439 03:07:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.439 03:07:30 env -- scripts/common.sh@368 -- # return 0 00:04:47.439 03:07:30 env -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.439 03:07:30 env -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:47.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.439 --rc genhtml_branch_coverage=1 00:04:47.439 --rc genhtml_function_coverage=1 00:04:47.439 --rc genhtml_legend=1 00:04:47.439 --rc geninfo_all_blocks=1 00:04:47.439 --rc geninfo_unexecuted_blocks=1 00:04:47.439 00:04:47.439 ' 00:04:47.439 03:07:30 env -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:47.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:47.439 --rc genhtml_branch_coverage=1 00:04:47.439 --rc genhtml_function_coverage=1 00:04:47.439 --rc genhtml_legend=1 00:04:47.439 --rc geninfo_all_blocks=1 00:04:47.439 --rc geninfo_unexecuted_blocks=1 00:04:47.439 00:04:47.439 ' 00:04:47.439 03:07:30 env -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:47.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.439 --rc genhtml_branch_coverage=1 00:04:47.439 --rc genhtml_function_coverage=1 00:04:47.439 --rc genhtml_legend=1 00:04:47.439 --rc geninfo_all_blocks=1 00:04:47.439 --rc geninfo_unexecuted_blocks=1 00:04:47.439 00:04:47.439 ' 00:04:47.439 03:07:30 env -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:47.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.439 --rc genhtml_branch_coverage=1 00:04:47.439 --rc genhtml_function_coverage=1 00:04:47.439 --rc genhtml_legend=1 00:04:47.439 --rc geninfo_all_blocks=1 00:04:47.439 --rc geninfo_unexecuted_blocks=1 00:04:47.439 00:04:47.439 ' 00:04:47.439 03:07:30 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:47.439 03:07:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.439 03:07:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.439 03:07:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.439 ************************************ 00:04:47.439 START TEST env_memory 00:04:47.439 ************************************ 00:04:47.439 03:07:30 env.env_memory -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:47.439 00:04:47.439 00:04:47.439 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.439 http://cunit.sourceforge.net/ 00:04:47.439 00:04:47.439 00:04:47.439 Suite: memory 00:04:47.440 Test: alloc and free memory map ...[2024-10-09 03:07:30.585464] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial 
mem_map notify failed 00:04:47.440 passed 00:04:47.440 Test: mem map translation ...[2024-10-09 03:07:30.629421] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:47.440 [2024-10-09 03:07:30.629488] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:47.440 [2024-10-09 03:07:30.629560] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:47.440 [2024-10-09 03:07:30.629583] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:47.440 passed 00:04:47.440 Test: mem map registration ...[2024-10-09 03:07:30.699222] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:47.440 [2024-10-09 03:07:30.699284] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:47.440 passed 00:04:47.698 Test: mem map adjacent registrations ...passed 00:04:47.698 00:04:47.698 Run Summary: Type Total Ran Passed Failed Inactive 00:04:47.698 suites 1 1 n/a 0 0 00:04:47.698 tests 4 4 4 0 0 00:04:47.698 asserts 152 152 152 0 n/a 00:04:47.698 00:04:47.698 Elapsed time = 0.247 seconds 00:04:47.698 00:04:47.698 real 0m0.292s 00:04:47.698 user 0m0.261s 00:04:47.698 sys 0m0.023s 00:04:47.698 03:07:30 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:47.698 03:07:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:47.698 ************************************ 00:04:47.698 END TEST env_memory 00:04:47.698 ************************************ 00:04:47.698 03:07:30 
env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:47.698 03:07:30 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.698 03:07:30 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.699 03:07:30 env -- common/autotest_common.sh@10 -- # set +x 00:04:47.699 ************************************ 00:04:47.699 START TEST env_vtophys 00:04:47.699 ************************************ 00:04:47.699 03:07:30 env.env_vtophys -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:47.699 EAL: lib.eal log level changed from notice to debug 00:04:47.699 EAL: Detected lcore 0 as core 0 on socket 0 00:04:47.699 EAL: Detected lcore 1 as core 0 on socket 0 00:04:47.699 EAL: Detected lcore 2 as core 0 on socket 0 00:04:47.699 EAL: Detected lcore 3 as core 0 on socket 0 00:04:47.699 EAL: Detected lcore 4 as core 0 on socket 0 00:04:47.699 EAL: Detected lcore 5 as core 0 on socket 0 00:04:47.699 EAL: Detected lcore 6 as core 0 on socket 0 00:04:47.699 EAL: Detected lcore 7 as core 0 on socket 0 00:04:47.699 EAL: Detected lcore 8 as core 0 on socket 0 00:04:47.699 EAL: Detected lcore 9 as core 0 on socket 0 00:04:47.699 EAL: Maximum logical cores by configuration: 128 00:04:47.699 EAL: Detected CPU lcores: 10 00:04:47.699 EAL: Detected NUMA nodes: 1 00:04:47.699 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:47.699 EAL: Detected shared linkage of DPDK 00:04:47.699 EAL: No shared files mode enabled, IPC will be disabled 00:04:47.699 EAL: Selected IOVA mode 'PA' 00:04:47.699 EAL: Probing VFIO support... 00:04:47.699 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:47.699 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:47.699 EAL: Ask a virtual area of 0x2e000 bytes 00:04:47.699 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:47.699 EAL: Setting up physically contiguous memory... 
00:04:47.699 EAL: Setting maximum number of open files to 524288 00:04:47.699 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:47.699 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:47.699 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.699 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:47.699 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.699 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.699 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:47.699 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:47.699 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.699 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:47.699 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.699 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.699 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:47.699 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:47.699 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.699 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:47.699 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.699 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.699 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:47.699 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:47.699 EAL: Ask a virtual area of 0x61000 bytes 00:04:47.699 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:47.699 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:47.699 EAL: Ask a virtual area of 0x400000000 bytes 00:04:47.699 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:47.699 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:47.699 EAL: Hugepages will be freed exactly as allocated. 
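The EAL reservations above are internally consistent: each of the four memseg lists asks for a small 0x61000-byte header area plus VA space for n_segs × hugepage_sz, i.e. 8192 segments of 2 MiB. A quick arithmetic check of the 0x400000000 figure printed for each list:

```shell
# Verify the per-memseg-list VA reservation seen in the EAL output above:
# 8192 segments * 2 MiB hugepages = 0x400000000 bytes (16 GiB) per list.
n_segs=8192
hugepage_sz=$(( 2 * 1024 * 1024 ))
printf '0x%x\n' $(( n_segs * hugepage_sz ))   # prints 0x400000000
```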
00:04:47.699 EAL: No shared files mode enabled, IPC is disabled 00:04:47.699 EAL: No shared files mode enabled, IPC is disabled 00:04:47.957 EAL: TSC frequency is ~2290000 KHz 00:04:47.957 EAL: Main lcore 0 is ready (tid=7fd16aa1aa40;cpuset=[0]) 00:04:47.957 EAL: Trying to obtain current memory policy. 00:04:47.957 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:47.957 EAL: Restoring previous memory policy: 0 00:04:47.957 EAL: request: mp_malloc_sync 00:04:47.957 EAL: No shared files mode enabled, IPC is disabled 00:04:47.957 EAL: Heap on socket 0 was expanded by 2MB 00:04:47.957 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:47.957 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:47.957 EAL: Mem event callback 'spdk:(nil)' registered 00:04:47.957 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:47.957 00:04:47.957 00:04:47.957 CUnit - A unit testing framework for C - Version 2.1-3 00:04:47.957 http://cunit.sourceforge.net/ 00:04:47.957 00:04:47.957 00:04:47.957 Suite: components_suite 00:04:48.215 Test: vtophys_malloc_test ...passed 00:04:48.215 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:48.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.215 EAL: Restoring previous memory policy: 4 00:04:48.215 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.215 EAL: request: mp_malloc_sync 00:04:48.215 EAL: No shared files mode enabled, IPC is disabled 00:04:48.215 EAL: Heap on socket 0 was expanded by 4MB 00:04:48.215 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.215 EAL: request: mp_malloc_sync 00:04:48.215 EAL: No shared files mode enabled, IPC is disabled 00:04:48.215 EAL: Heap on socket 0 was shrunk by 4MB 00:04:48.215 EAL: Trying to obtain current memory policy. 
00:04:48.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.215 EAL: Restoring previous memory policy: 4 00:04:48.215 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.215 EAL: request: mp_malloc_sync 00:04:48.215 EAL: No shared files mode enabled, IPC is disabled 00:04:48.215 EAL: Heap on socket 0 was expanded by 6MB 00:04:48.215 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.215 EAL: request: mp_malloc_sync 00:04:48.215 EAL: No shared files mode enabled, IPC is disabled 00:04:48.215 EAL: Heap on socket 0 was shrunk by 6MB 00:04:48.215 EAL: Trying to obtain current memory policy. 00:04:48.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.215 EAL: Restoring previous memory policy: 4 00:04:48.215 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.215 EAL: request: mp_malloc_sync 00:04:48.215 EAL: No shared files mode enabled, IPC is disabled 00:04:48.215 EAL: Heap on socket 0 was expanded by 10MB 00:04:48.215 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.215 EAL: request: mp_malloc_sync 00:04:48.215 EAL: No shared files mode enabled, IPC is disabled 00:04:48.215 EAL: Heap on socket 0 was shrunk by 10MB 00:04:48.215 EAL: Trying to obtain current memory policy. 00:04:48.215 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.215 EAL: Restoring previous memory policy: 4 00:04:48.215 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.215 EAL: request: mp_malloc_sync 00:04:48.215 EAL: No shared files mode enabled, IPC is disabled 00:04:48.215 EAL: Heap on socket 0 was expanded by 18MB 00:04:48.474 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.474 EAL: request: mp_malloc_sync 00:04:48.474 EAL: No shared files mode enabled, IPC is disabled 00:04:48.474 EAL: Heap on socket 0 was shrunk by 18MB 00:04:48.474 EAL: Trying to obtain current memory policy. 
00:04:48.475 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.475 EAL: Restoring previous memory policy: 4 00:04:48.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.475 EAL: request: mp_malloc_sync 00:04:48.475 EAL: No shared files mode enabled, IPC is disabled 00:04:48.475 EAL: Heap on socket 0 was expanded by 34MB 00:04:48.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.475 EAL: request: mp_malloc_sync 00:04:48.475 EAL: No shared files mode enabled, IPC is disabled 00:04:48.475 EAL: Heap on socket 0 was shrunk by 34MB 00:04:48.475 EAL: Trying to obtain current memory policy. 00:04:48.475 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.475 EAL: Restoring previous memory policy: 4 00:04:48.475 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.475 EAL: request: mp_malloc_sync 00:04:48.475 EAL: No shared files mode enabled, IPC is disabled 00:04:48.475 EAL: Heap on socket 0 was expanded by 66MB 00:04:48.735 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.735 EAL: request: mp_malloc_sync 00:04:48.735 EAL: No shared files mode enabled, IPC is disabled 00:04:48.735 EAL: Heap on socket 0 was shrunk by 66MB 00:04:48.735 EAL: Trying to obtain current memory policy. 00:04:48.735 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:48.735 EAL: Restoring previous memory policy: 4 00:04:48.735 EAL: Calling mem event callback 'spdk:(nil)' 00:04:48.735 EAL: request: mp_malloc_sync 00:04:48.735 EAL: No shared files mode enabled, IPC is disabled 00:04:48.735 EAL: Heap on socket 0 was expanded by 130MB 00:04:49.000 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.000 EAL: request: mp_malloc_sync 00:04:49.000 EAL: No shared files mode enabled, IPC is disabled 00:04:49.000 EAL: Heap on socket 0 was shrunk by 130MB 00:04:49.260 EAL: Trying to obtain current memory policy. 
00:04:49.260 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:49.260 EAL: Restoring previous memory policy: 4 00:04:49.260 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.260 EAL: request: mp_malloc_sync 00:04:49.260 EAL: No shared files mode enabled, IPC is disabled 00:04:49.260 EAL: Heap on socket 0 was expanded by 258MB 00:04:49.829 EAL: Calling mem event callback 'spdk:(nil)' 00:04:49.829 EAL: request: mp_malloc_sync 00:04:49.829 EAL: No shared files mode enabled, IPC is disabled 00:04:49.829 EAL: Heap on socket 0 was shrunk by 258MB 00:04:50.397 EAL: Trying to obtain current memory policy. 00:04:50.397 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:50.397 EAL: Restoring previous memory policy: 4 00:04:50.397 EAL: Calling mem event callback 'spdk:(nil)' 00:04:50.397 EAL: request: mp_malloc_sync 00:04:50.397 EAL: No shared files mode enabled, IPC is disabled 00:04:50.397 EAL: Heap on socket 0 was expanded by 514MB 00:04:51.333 EAL: Calling mem event callback 'spdk:(nil)' 00:04:51.333 EAL: request: mp_malloc_sync 00:04:51.333 EAL: No shared files mode enabled, IPC is disabled 00:04:51.333 EAL: Heap on socket 0 was shrunk by 514MB 00:04:52.269 EAL: Trying to obtain current memory policy. 
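The vtophys_spdk_malloc_test expansion sizes in the trace (4, 6, 10, 18, 34, 66, 130, 258, 514, then 1026 MB) follow a 2^k + 2 progression, with each heap expansion matched by a shrink of the same size when the buffer is freed. The sequence can be reproduced directly:

```shell
# Reproduce the heap-expansion sizes from the EAL trace: 2^k + 2 MB, k = 1..10.
for k in $(seq 1 10); do
  printf '%d ' $(( (1 << k) + 2 ))
done
printf '\n'   # 4 6 10 18 34 66 130 258 514 1026
```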
00:04:52.269 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:52.269 EAL: Restoring previous memory policy: 4 00:04:52.269 EAL: Calling mem event callback 'spdk:(nil)' 00:04:52.269 EAL: request: mp_malloc_sync 00:04:52.269 EAL: No shared files mode enabled, IPC is disabled 00:04:52.269 EAL: Heap on socket 0 was expanded by 1026MB 00:04:54.172 EAL: Calling mem event callback 'spdk:(nil)' 00:04:54.172 EAL: request: mp_malloc_sync 00:04:54.172 EAL: No shared files mode enabled, IPC is disabled 00:04:54.172 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:56.076 passed 00:04:56.076 00:04:56.076 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.076 suites 1 1 n/a 0 0 00:04:56.076 tests 2 2 2 0 0 00:04:56.076 asserts 5817 5817 5817 0 n/a 00:04:56.076 00:04:56.076 Elapsed time = 7.930 seconds 00:04:56.076 EAL: Calling mem event callback 'spdk:(nil)' 00:04:56.076 EAL: request: mp_malloc_sync 00:04:56.076 EAL: No shared files mode enabled, IPC is disabled 00:04:56.076 EAL: Heap on socket 0 was shrunk by 2MB 00:04:56.076 EAL: No shared files mode enabled, IPC is disabled 00:04:56.076 EAL: No shared files mode enabled, IPC is disabled 00:04:56.076 EAL: No shared files mode enabled, IPC is disabled 00:04:56.076 00:04:56.076 real 0m8.249s 00:04:56.076 user 0m7.297s 00:04:56.076 sys 0m0.798s 00:04:56.076 03:07:39 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.076 03:07:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:56.076 ************************************ 00:04:56.076 END TEST env_vtophys 00:04:56.076 ************************************ 00:04:56.076 03:07:39 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:56.076 03:07:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.076 03:07:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.076 03:07:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.076 
************************************ 00:04:56.076 START TEST env_pci 00:04:56.076 ************************************ 00:04:56.076 03:07:39 env.env_pci -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:56.076 00:04:56.076 00:04:56.076 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.076 http://cunit.sourceforge.net/ 00:04:56.076 00:04:56.076 00:04:56.076 Suite: pci 00:04:56.076 Test: pci_hook ...[2024-10-09 03:07:39.233936] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1049:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56798 has claimed it 00:04:56.076 passed 00:04:56.076 00:04:56.076 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.076 suites 1 1 n/a 0 0 00:04:56.076 tests 1 1 1 0 0 00:04:56.076 asserts 25 25 25 0 n/a 00:04:56.076 00:04:56.076 Elapsed time = 0.005 seconds 00:04:56.076 EAL: Cannot find device (10000:00:01.0) 00:04:56.076 EAL: Failed to attach device on primary process 00:04:56.076 00:04:56.076 real 0m0.090s 00:04:56.076 user 0m0.038s 00:04:56.076 sys 0m0.052s 00:04:56.076 03:07:39 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.076 03:07:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:56.076 ************************************ 00:04:56.076 END TEST env_pci 00:04:56.076 ************************************ 00:04:56.076 03:07:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:56.076 03:07:39 env -- env/env.sh@15 -- # uname 00:04:56.076 03:07:39 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:56.076 03:07:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:56.076 03:07:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.076 03:07:39 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:56.076 03:07:39 env 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.076 03:07:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.076 ************************************ 00:04:56.076 START TEST env_dpdk_post_init 00:04:56.076 ************************************ 00:04:56.076 03:07:39 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:56.335 EAL: Detected CPU lcores: 10 00:04:56.335 EAL: Detected NUMA nodes: 1 00:04:56.335 EAL: Detected shared linkage of DPDK 00:04:56.335 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.335 EAL: Selected IOVA mode 'PA' 00:04:56.335 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.336 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:56.336 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:56.336 Starting DPDK initialization... 00:04:56.336 Starting SPDK post initialization... 00:04:56.336 SPDK NVMe probe 00:04:56.336 Attaching to 0000:00:10.0 00:04:56.336 Attaching to 0000:00:11.0 00:04:56.336 Attached to 0000:00:10.0 00:04:56.336 Attached to 0000:00:11.0 00:04:56.336 Cleaning up... 
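The probe lines above identify devices by PCI address in domain:bus:device.function form (e.g. `0000:00:10.0`). A small illustrative helper (not part of SPDK) that splits such an address into its components:

```shell
#!/usr/bin/env bash
# Illustrative helper, not part of SPDK: split a PCI BDF address such as the
# "0000:00:10.0" / "0000:00:11.0" values probed in this log into its fields.
parse_bdf() {
  local addr=$1 domain bus dev func
  # Fields are separated by ':' between domain/bus/device and '.' before function.
  IFS=':.' read -r domain bus dev func <<< "$addr"
  echo "domain=$domain bus=$bus device=$dev function=$func"
}

parse_bdf 0000:00:10.0
# -> domain=0000 bus=00 device=10 function=0
```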
00:04:56.336 00:04:56.336 real 0m0.272s 00:04:56.336 user 0m0.077s 00:04:56.336 sys 0m0.094s 00:04:56.336 03:07:39 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.336 03:07:39 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.336 ************************************ 00:04:56.336 END TEST env_dpdk_post_init 00:04:56.336 ************************************ 00:04:56.594 03:07:39 env -- env/env.sh@26 -- # uname 00:04:56.594 03:07:39 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:56.594 03:07:39 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.594 03:07:39 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.594 03:07:39 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.594 03:07:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.594 ************************************ 00:04:56.594 START TEST env_mem_callbacks 00:04:56.595 ************************************ 00:04:56.595 03:07:39 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:56.595 EAL: Detected CPU lcores: 10 00:04:56.595 EAL: Detected NUMA nodes: 1 00:04:56.595 EAL: Detected shared linkage of DPDK 00:04:56.595 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:56.595 EAL: Selected IOVA mode 'PA' 00:04:56.595 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:56.595 00:04:56.595 00:04:56.595 CUnit - A unit testing framework for C - Version 2.1-3 00:04:56.595 http://cunit.sourceforge.net/ 00:04:56.595 00:04:56.595 00:04:56.595 Suite: memory 00:04:56.595 Test: test ... 
00:04:56.595 register 0x200000200000 2097152 00:04:56.595 malloc 3145728 00:04:56.595 register 0x200000400000 4194304 00:04:56.595 buf 0x2000004fffc0 len 3145728 PASSED 00:04:56.595 malloc 64 00:04:56.595 buf 0x2000004ffec0 len 64 PASSED 00:04:56.595 malloc 4194304 00:04:56.595 register 0x200000800000 6291456 00:04:56.595 buf 0x2000009fffc0 len 4194304 PASSED 00:04:56.595 free 0x2000004fffc0 3145728 00:04:56.595 free 0x2000004ffec0 64 00:04:56.595 unregister 0x200000400000 4194304 PASSED 00:04:56.853 free 0x2000009fffc0 4194304 00:04:56.853 unregister 0x200000800000 6291456 PASSED 00:04:56.853 malloc 8388608 00:04:56.853 register 0x200000400000 10485760 00:04:56.853 buf 0x2000005fffc0 len 8388608 PASSED 00:04:56.853 free 0x2000005fffc0 8388608 00:04:56.853 unregister 0x200000400000 10485760 PASSED 00:04:56.853 passed 00:04:56.853 00:04:56.853 Run Summary: Type Total Ran Passed Failed Inactive 00:04:56.853 suites 1 1 n/a 0 0 00:04:56.853 tests 1 1 1 0 0 00:04:56.853 asserts 15 15 15 0 n/a 00:04:56.853 00:04:56.853 Elapsed time = 0.084 seconds 00:04:56.853 00:04:56.853 real 0m0.280s 00:04:56.853 user 0m0.111s 00:04:56.853 sys 0m0.067s 00:04:56.853 03:07:39 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.853 03:07:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:56.853 ************************************ 00:04:56.853 END TEST env_mem_callbacks 00:04:56.853 ************************************ 00:04:56.853 00:04:56.853 real 0m9.716s 00:04:56.853 user 0m8.002s 00:04:56.853 sys 0m1.370s 00:04:56.853 03:07:40 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:56.853 03:07:40 env -- common/autotest_common.sh@10 -- # set +x 00:04:56.853 ************************************ 00:04:56.853 END TEST env 00:04:56.853 ************************************ 00:04:56.853 03:07:40 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:56.853 03:07:40 -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:56.853 03:07:40 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:56.853 03:07:40 -- common/autotest_common.sh@10 -- # set +x 00:04:56.853 ************************************ 00:04:56.853 START TEST rpc 00:04:56.853 ************************************ 00:04:56.853 03:07:40 rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:57.112 * Looking for test storage... 00:04:57.112 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:04:57.112 03:07:40 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.112 03:07:40 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.112 03:07:40 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.112 03:07:40 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.112 03:07:40 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.112 03:07:40 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.112 03:07:40 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.112 03:07:40 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.112 03:07:40 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.112 03:07:40 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.112 03:07:40 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.112 03:07:40 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:57.112 03:07:40 rpc -- scripts/common.sh@345 -- # : 1 00:04:57.112 03:07:40 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.112 03:07:40 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.112 03:07:40 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:57.112 03:07:40 rpc -- scripts/common.sh@353 -- # local d=1 00:04:57.112 03:07:40 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.112 03:07:40 rpc -- scripts/common.sh@355 -- # echo 1 00:04:57.112 03:07:40 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.112 03:07:40 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:57.112 03:07:40 rpc -- scripts/common.sh@353 -- # local d=2 00:04:57.112 03:07:40 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.112 03:07:40 rpc -- scripts/common.sh@355 -- # echo 2 00:04:57.112 03:07:40 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.112 03:07:40 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.112 03:07:40 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.112 03:07:40 rpc -- scripts/common.sh@368 -- # return 0 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:04:57.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.112 --rc genhtml_branch_coverage=1 00:04:57.112 --rc genhtml_function_coverage=1 00:04:57.112 --rc genhtml_legend=1 00:04:57.112 --rc geninfo_all_blocks=1 00:04:57.112 --rc geninfo_unexecuted_blocks=1 00:04:57.112 00:04:57.112 ' 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:04:57.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.112 --rc genhtml_branch_coverage=1 00:04:57.112 --rc genhtml_function_coverage=1 00:04:57.112 --rc genhtml_legend=1 00:04:57.112 --rc geninfo_all_blocks=1 00:04:57.112 --rc geninfo_unexecuted_blocks=1 00:04:57.112 00:04:57.112 ' 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:04:57.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:57.112 --rc genhtml_branch_coverage=1 00:04:57.112 --rc genhtml_function_coverage=1 00:04:57.112 --rc genhtml_legend=1 00:04:57.112 --rc geninfo_all_blocks=1 00:04:57.112 --rc geninfo_unexecuted_blocks=1 00:04:57.112 00:04:57.112 ' 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:04:57.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.112 --rc genhtml_branch_coverage=1 00:04:57.112 --rc genhtml_function_coverage=1 00:04:57.112 --rc genhtml_legend=1 00:04:57.112 --rc geninfo_all_blocks=1 00:04:57.112 --rc geninfo_unexecuted_blocks=1 00:04:57.112 00:04:57.112 ' 00:04:57.112 03:07:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56925 00:04:57.112 03:07:40 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:57.112 03:07:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.112 03:07:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56925 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@831 -- # '[' -z 56925 ']' 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.112 03:07:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:57.112 [2024-10-09 03:07:40.385454] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:04:57.112 [2024-10-09 03:07:40.385586] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56925 ] 00:04:57.371 [2024-10-09 03:07:40.547984] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.630 [2024-10-09 03:07:40.755641] app.c: 610:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:57.630 [2024-10-09 03:07:40.755703] app.c: 611:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56925' to capture a snapshot of events at runtime. 00:04:57.630 [2024-10-09 03:07:40.755713] app.c: 616:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:57.630 [2024-10-09 03:07:40.755722] app.c: 617:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:57.630 [2024-10-09 03:07:40.755729] app.c: 618:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56925 for offline analysis/debug. 
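The `cmp_versions` trace earlier in this section (`lt 1.15 2` from `scripts/common.sh`) splits each version string on separators and compares it numerically, component by component, padding the shorter one with zeros. A condensed, hypothetical sketch of that idea (the real script handles more separators and comparison operators):

```shell
#!/usr/bin/env bash
# Simplified sketch of component-wise version comparison, modeled on the
# scripts/common.sh cmp_versions trace in this log; hypothetical helper name.
version_lt() {
  local -a v1 v2
  IFS=. read -ra v1 <<< "$1"
  IFS=. read -ra v2 <<< "$2"
  local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < max; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}   # missing components compare as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Comparing component-wise rather than lexically is what makes `1.15 < 2` come out true here, matching the lcov version check in the trace.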
00:04:57.630 [2024-10-09 03:07:40.756986] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.568 03:07:41 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:58.568 03:07:41 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:58.568 03:07:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.568 03:07:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:58.568 03:07:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:58.568 03:07:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:58.568 03:07:41 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.568 03:07:41 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.568 03:07:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.568 ************************************ 00:04:58.568 START TEST rpc_integrity 00:04:58.568 ************************************ 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:58.568 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.568 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:58.568 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:58.568 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:58.568 03:07:41 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.568 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:58.568 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.568 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:58.568 { 00:04:58.568 "name": "Malloc0", 00:04:58.568 "aliases": [ 00:04:58.568 "641177c5-ebec-41f9-ae67-e7b298bb22f8" 00:04:58.568 ], 00:04:58.568 "product_name": "Malloc disk", 00:04:58.568 "block_size": 512, 00:04:58.568 "num_blocks": 16384, 00:04:58.568 "uuid": "641177c5-ebec-41f9-ae67-e7b298bb22f8", 00:04:58.568 "assigned_rate_limits": { 00:04:58.568 "rw_ios_per_sec": 0, 00:04:58.568 "rw_mbytes_per_sec": 0, 00:04:58.568 "r_mbytes_per_sec": 0, 00:04:58.568 "w_mbytes_per_sec": 0 00:04:58.568 }, 00:04:58.568 "claimed": false, 00:04:58.568 "zoned": false, 00:04:58.568 "supported_io_types": { 00:04:58.568 "read": true, 00:04:58.568 "write": true, 00:04:58.568 "unmap": true, 00:04:58.568 "flush": true, 00:04:58.568 "reset": true, 00:04:58.568 "nvme_admin": false, 00:04:58.568 "nvme_io": false, 00:04:58.568 "nvme_io_md": false, 00:04:58.568 "write_zeroes": true, 00:04:58.568 "zcopy": true, 00:04:58.568 "get_zone_info": false, 00:04:58.568 "zone_management": false, 00:04:58.568 "zone_append": false, 00:04:58.568 "compare": false, 00:04:58.568 "compare_and_write": false, 00:04:58.568 "abort": true, 00:04:58.568 "seek_hole": false, 
00:04:58.568 "seek_data": false, 00:04:58.568 "copy": true, 00:04:58.568 "nvme_iov_md": false 00:04:58.568 }, 00:04:58.568 "memory_domains": [ 00:04:58.568 { 00:04:58.568 "dma_device_id": "system", 00:04:58.568 "dma_device_type": 1 00:04:58.568 }, 00:04:58.568 { 00:04:58.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.568 "dma_device_type": 2 00:04:58.568 } 00:04:58.568 ], 00:04:58.568 "driver_specific": {} 00:04:58.568 } 00:04:58.568 ]' 00:04:58.568 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:58.568 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:58.568 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.568 [2024-10-09 03:07:41.807606] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:58.568 [2024-10-09 03:07:41.807685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:58.568 [2024-10-09 03:07:41.807730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:04:58.568 [2024-10-09 03:07:41.807750] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:58.568 [2024-10-09 03:07:41.810004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:58.568 [2024-10-09 03:07:41.810047] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:58.568 Passthru0 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.568 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:04:58.568 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.568 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:58.568 { 00:04:58.568 "name": "Malloc0", 00:04:58.568 "aliases": [ 00:04:58.568 "641177c5-ebec-41f9-ae67-e7b298bb22f8" 00:04:58.568 ], 00:04:58.568 "product_name": "Malloc disk", 00:04:58.568 "block_size": 512, 00:04:58.568 "num_blocks": 16384, 00:04:58.568 "uuid": "641177c5-ebec-41f9-ae67-e7b298bb22f8", 00:04:58.568 "assigned_rate_limits": { 00:04:58.568 "rw_ios_per_sec": 0, 00:04:58.568 "rw_mbytes_per_sec": 0, 00:04:58.568 "r_mbytes_per_sec": 0, 00:04:58.568 "w_mbytes_per_sec": 0 00:04:58.568 }, 00:04:58.568 "claimed": true, 00:04:58.568 "claim_type": "exclusive_write", 00:04:58.568 "zoned": false, 00:04:58.568 "supported_io_types": { 00:04:58.568 "read": true, 00:04:58.568 "write": true, 00:04:58.568 "unmap": true, 00:04:58.568 "flush": true, 00:04:58.568 "reset": true, 00:04:58.568 "nvme_admin": false, 00:04:58.568 "nvme_io": false, 00:04:58.568 "nvme_io_md": false, 00:04:58.568 "write_zeroes": true, 00:04:58.568 "zcopy": true, 00:04:58.568 "get_zone_info": false, 00:04:58.568 "zone_management": false, 00:04:58.568 "zone_append": false, 00:04:58.568 "compare": false, 00:04:58.568 "compare_and_write": false, 00:04:58.568 "abort": true, 00:04:58.568 "seek_hole": false, 00:04:58.568 "seek_data": false, 00:04:58.568 "copy": true, 00:04:58.568 "nvme_iov_md": false 00:04:58.568 }, 00:04:58.568 "memory_domains": [ 00:04:58.568 { 00:04:58.568 "dma_device_id": "system", 00:04:58.568 "dma_device_type": 1 00:04:58.568 }, 00:04:58.568 { 00:04:58.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.568 "dma_device_type": 2 00:04:58.568 } 00:04:58.568 ], 00:04:58.568 "driver_specific": {} 00:04:58.568 }, 00:04:58.568 { 00:04:58.568 "name": "Passthru0", 00:04:58.568 "aliases": [ 00:04:58.568 "fac19fb4-a519-5daa-b5c3-f5ad959cbf23" 00:04:58.568 ], 00:04:58.568 "product_name": "passthru", 00:04:58.568 
"block_size": 512, 00:04:58.568 "num_blocks": 16384, 00:04:58.568 "uuid": "fac19fb4-a519-5daa-b5c3-f5ad959cbf23", 00:04:58.568 "assigned_rate_limits": { 00:04:58.568 "rw_ios_per_sec": 0, 00:04:58.568 "rw_mbytes_per_sec": 0, 00:04:58.568 "r_mbytes_per_sec": 0, 00:04:58.568 "w_mbytes_per_sec": 0 00:04:58.568 }, 00:04:58.568 "claimed": false, 00:04:58.568 "zoned": false, 00:04:58.568 "supported_io_types": { 00:04:58.568 "read": true, 00:04:58.568 "write": true, 00:04:58.568 "unmap": true, 00:04:58.568 "flush": true, 00:04:58.568 "reset": true, 00:04:58.568 "nvme_admin": false, 00:04:58.568 "nvme_io": false, 00:04:58.568 "nvme_io_md": false, 00:04:58.568 "write_zeroes": true, 00:04:58.568 "zcopy": true, 00:04:58.568 "get_zone_info": false, 00:04:58.568 "zone_management": false, 00:04:58.568 "zone_append": false, 00:04:58.568 "compare": false, 00:04:58.568 "compare_and_write": false, 00:04:58.568 "abort": true, 00:04:58.568 "seek_hole": false, 00:04:58.568 "seek_data": false, 00:04:58.568 "copy": true, 00:04:58.568 "nvme_iov_md": false 00:04:58.568 }, 00:04:58.568 "memory_domains": [ 00:04:58.568 { 00:04:58.568 "dma_device_id": "system", 00:04:58.568 "dma_device_type": 1 00:04:58.568 }, 00:04:58.568 { 00:04:58.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.568 "dma_device_type": 2 00:04:58.568 } 00:04:58.568 ], 00:04:58.568 "driver_specific": { 00:04:58.568 "passthru": { 00:04:58.568 "name": "Passthru0", 00:04:58.568 "base_bdev_name": "Malloc0" 00:04:58.568 } 00:04:58.568 } 00:04:58.568 } 00:04:58.568 ]' 00:04:58.568 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:58.835 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:58.835 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:58.835 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.835 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.835 03:07:41 rpc.rpc_integrity 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.835 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:58.835 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.835 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.835 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.835 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:58.835 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.835 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.835 03:07:41 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.835 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:58.835 03:07:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:58.835 03:07:42 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:58.835 00:04:58.835 real 0m0.358s 00:04:58.835 user 0m0.193s 00:04:58.835 sys 0m0.061s 00:04:58.835 03:07:42 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:58.835 03:07:42 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:58.835 ************************************ 00:04:58.835 END TEST rpc_integrity 00:04:58.835 ************************************ 00:04:58.835 03:07:42 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:58.835 03:07:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:58.835 03:07:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:58.835 03:07:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:58.835 ************************************ 00:04:58.835 START TEST rpc_plugins 00:04:58.835 ************************************ 00:04:58.835 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:58.835 03:07:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:04:58.835 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.835 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.835 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.835 03:07:42 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:58.835 03:07:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:58.835 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:58.835 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:58.835 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:58.835 03:07:42 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:58.835 { 00:04:58.835 "name": "Malloc1", 00:04:58.835 "aliases": [ 00:04:58.835 "565f17ec-d0c7-4d9c-a001-e92d72a4dccb" 00:04:58.835 ], 00:04:58.835 "product_name": "Malloc disk", 00:04:58.835 "block_size": 4096, 00:04:58.835 "num_blocks": 256, 00:04:58.835 "uuid": "565f17ec-d0c7-4d9c-a001-e92d72a4dccb", 00:04:58.835 "assigned_rate_limits": { 00:04:58.835 "rw_ios_per_sec": 0, 00:04:58.835 "rw_mbytes_per_sec": 0, 00:04:58.835 "r_mbytes_per_sec": 0, 00:04:58.835 "w_mbytes_per_sec": 0 00:04:58.835 }, 00:04:58.835 "claimed": false, 00:04:58.835 "zoned": false, 00:04:58.835 "supported_io_types": { 00:04:58.835 "read": true, 00:04:58.835 "write": true, 00:04:58.835 "unmap": true, 00:04:58.835 "flush": true, 00:04:58.835 "reset": true, 00:04:58.835 "nvme_admin": false, 00:04:58.835 "nvme_io": false, 00:04:58.835 "nvme_io_md": false, 00:04:58.835 "write_zeroes": true, 00:04:58.835 "zcopy": true, 00:04:58.835 "get_zone_info": false, 00:04:58.835 "zone_management": false, 00:04:58.835 "zone_append": false, 00:04:58.835 "compare": false, 00:04:58.835 "compare_and_write": false, 00:04:58.835 "abort": true, 00:04:58.835 "seek_hole": false, 00:04:58.835 "seek_data": false, 00:04:58.835 "copy": 
true, 00:04:58.835 "nvme_iov_md": false 00:04:58.835 }, 00:04:58.835 "memory_domains": [ 00:04:58.835 { 00:04:58.835 "dma_device_id": "system", 00:04:58.835 "dma_device_type": 1 00:04:58.835 }, 00:04:58.835 { 00:04:58.835 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:58.835 "dma_device_type": 2 00:04:58.835 } 00:04:58.835 ], 00:04:58.835 "driver_specific": {} 00:04:58.835 } 00:04:58.835 ]' 00:04:58.835 03:07:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:59.095 03:07:42 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:59.095 03:07:42 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:59.095 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.095 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.095 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.095 03:07:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:59.095 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.095 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.095 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.095 03:07:42 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:59.095 03:07:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:59.095 03:07:42 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:59.095 00:04:59.095 real 0m0.166s 00:04:59.095 user 0m0.102s 00:04:59.095 sys 0m0.019s 00:04:59.095 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.095 03:07:42 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:59.095 ************************************ 00:04:59.095 END TEST rpc_plugins 00:04:59.095 ************************************ 00:04:59.095 03:07:42 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:59.095 03:07:42 rpc -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.095 03:07:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.095 03:07:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.095 ************************************ 00:04:59.095 START TEST rpc_trace_cmd_test 00:04:59.095 ************************************ 00:04:59.095 03:07:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:59.095 03:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:59.095 03:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:59.095 03:07:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.095 03:07:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.095 03:07:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.095 03:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:59.095 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56925", 00:04:59.095 "tpoint_group_mask": "0x8", 00:04:59.095 "iscsi_conn": { 00:04:59.095 "mask": "0x2", 00:04:59.095 "tpoint_mask": "0x0" 00:04:59.095 }, 00:04:59.095 "scsi": { 00:04:59.095 "mask": "0x4", 00:04:59.095 "tpoint_mask": "0x0" 00:04:59.095 }, 00:04:59.095 "bdev": { 00:04:59.095 "mask": "0x8", 00:04:59.095 "tpoint_mask": "0xffffffffffffffff" 00:04:59.095 }, 00:04:59.095 "nvmf_rdma": { 00:04:59.095 "mask": "0x10", 00:04:59.095 "tpoint_mask": "0x0" 00:04:59.095 }, 00:04:59.095 "nvmf_tcp": { 00:04:59.095 "mask": "0x20", 00:04:59.095 "tpoint_mask": "0x0" 00:04:59.095 }, 00:04:59.095 "ftl": { 00:04:59.095 "mask": "0x40", 00:04:59.095 "tpoint_mask": "0x0" 00:04:59.095 }, 00:04:59.095 "blobfs": { 00:04:59.095 "mask": "0x80", 00:04:59.095 "tpoint_mask": "0x0" 00:04:59.095 }, 00:04:59.095 "dsa": { 00:04:59.095 "mask": "0x200", 00:04:59.095 "tpoint_mask": "0x0" 00:04:59.095 }, 00:04:59.095 "thread": { 00:04:59.095 "mask": "0x400", 00:04:59.095 
"tpoint_mask": "0x0" 00:04:59.095 }, 00:04:59.095 "nvme_pcie": { 00:04:59.095 "mask": "0x800", 00:04:59.095 "tpoint_mask": "0x0" 00:04:59.095 }, 00:04:59.095 "iaa": { 00:04:59.095 "mask": "0x1000", 00:04:59.095 "tpoint_mask": "0x0" 00:04:59.095 }, 00:04:59.095 "nvme_tcp": { 00:04:59.095 "mask": "0x2000", 00:04:59.095 "tpoint_mask": "0x0" 00:04:59.095 }, 00:04:59.095 "bdev_nvme": { 00:04:59.095 "mask": "0x4000", 00:04:59.095 "tpoint_mask": "0x0" 00:04:59.095 }, 00:04:59.095 "sock": { 00:04:59.095 "mask": "0x8000", 00:04:59.095 "tpoint_mask": "0x0" 00:04:59.095 }, 00:04:59.095 "blob": { 00:04:59.096 "mask": "0x10000", 00:04:59.096 "tpoint_mask": "0x0" 00:04:59.096 }, 00:04:59.096 "bdev_raid": { 00:04:59.096 "mask": "0x20000", 00:04:59.096 "tpoint_mask": "0x0" 00:04:59.096 }, 00:04:59.096 "scheduler": { 00:04:59.096 "mask": "0x40000", 00:04:59.096 "tpoint_mask": "0x0" 00:04:59.096 } 00:04:59.096 }' 00:04:59.096 03:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:59.096 03:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:59.096 03:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:59.355 03:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:59.355 03:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:59.355 03:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:59.355 03:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:59.355 03:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:59.355 03:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:59.355 ************************************ 00:04:59.355 END TEST rpc_trace_cmd_test 00:04:59.355 ************************************ 00:04:59.355 03:07:42 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:59.355 00:04:59.355 real 0m0.263s 00:04:59.355 user 
0m0.216s 00:04:59.355 sys 0m0.037s 00:04:59.355 03:07:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.355 03:07:42 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:59.355 03:07:42 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:59.355 03:07:42 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:59.355 03:07:42 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:59.355 03:07:42 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:59.355 03:07:42 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:59.355 03:07:42 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:59.355 ************************************ 00:04:59.355 START TEST rpc_daemon_integrity 00:04:59.355 ************************************ 00:04:59.355 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:59.355 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:59.355 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.355 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.355 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.355 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:59.355 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:59.614 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:59.614 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:59.614 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.614 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # 
malloc=Malloc2 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:59.615 { 00:04:59.615 "name": "Malloc2", 00:04:59.615 "aliases": [ 00:04:59.615 "fe2c4f0b-2022-4b42-afd1-683a5dc3788d" 00:04:59.615 ], 00:04:59.615 "product_name": "Malloc disk", 00:04:59.615 "block_size": 512, 00:04:59.615 "num_blocks": 16384, 00:04:59.615 "uuid": "fe2c4f0b-2022-4b42-afd1-683a5dc3788d", 00:04:59.615 "assigned_rate_limits": { 00:04:59.615 "rw_ios_per_sec": 0, 00:04:59.615 "rw_mbytes_per_sec": 0, 00:04:59.615 "r_mbytes_per_sec": 0, 00:04:59.615 "w_mbytes_per_sec": 0 00:04:59.615 }, 00:04:59.615 "claimed": false, 00:04:59.615 "zoned": false, 00:04:59.615 "supported_io_types": { 00:04:59.615 "read": true, 00:04:59.615 "write": true, 00:04:59.615 "unmap": true, 00:04:59.615 "flush": true, 00:04:59.615 "reset": true, 00:04:59.615 "nvme_admin": false, 00:04:59.615 "nvme_io": false, 00:04:59.615 "nvme_io_md": false, 00:04:59.615 "write_zeroes": true, 00:04:59.615 "zcopy": true, 00:04:59.615 "get_zone_info": false, 00:04:59.615 "zone_management": false, 00:04:59.615 "zone_append": false, 00:04:59.615 "compare": false, 00:04:59.615 "compare_and_write": false, 00:04:59.615 "abort": true, 00:04:59.615 "seek_hole": false, 00:04:59.615 "seek_data": false, 00:04:59.615 "copy": true, 00:04:59.615 "nvme_iov_md": false 00:04:59.615 }, 00:04:59.615 "memory_domains": [ 00:04:59.615 { 00:04:59.615 "dma_device_id": "system", 00:04:59.615 "dma_device_type": 1 00:04:59.615 }, 00:04:59.615 { 00:04:59.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.615 "dma_device_type": 2 00:04:59.615 } 
00:04:59.615 ], 00:04:59.615 "driver_specific": {} 00:04:59.615 } 00:04:59.615 ]' 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.615 [2024-10-09 03:07:42.781351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:59.615 [2024-10-09 03:07:42.781423] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:59.615 [2024-10-09 03:07:42.781445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:04:59.615 [2024-10-09 03:07:42.781458] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:59.615 [2024-10-09 03:07:42.783787] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:59.615 [2024-10-09 03:07:42.783827] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:59.615 Passthru0 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:59.615 { 00:04:59.615 "name": "Malloc2", 00:04:59.615 "aliases": [ 00:04:59.615 "fe2c4f0b-2022-4b42-afd1-683a5dc3788d" 
00:04:59.615 ], 00:04:59.615 "product_name": "Malloc disk", 00:04:59.615 "block_size": 512, 00:04:59.615 "num_blocks": 16384, 00:04:59.615 "uuid": "fe2c4f0b-2022-4b42-afd1-683a5dc3788d", 00:04:59.615 "assigned_rate_limits": { 00:04:59.615 "rw_ios_per_sec": 0, 00:04:59.615 "rw_mbytes_per_sec": 0, 00:04:59.615 "r_mbytes_per_sec": 0, 00:04:59.615 "w_mbytes_per_sec": 0 00:04:59.615 }, 00:04:59.615 "claimed": true, 00:04:59.615 "claim_type": "exclusive_write", 00:04:59.615 "zoned": false, 00:04:59.615 "supported_io_types": { 00:04:59.615 "read": true, 00:04:59.615 "write": true, 00:04:59.615 "unmap": true, 00:04:59.615 "flush": true, 00:04:59.615 "reset": true, 00:04:59.615 "nvme_admin": false, 00:04:59.615 "nvme_io": false, 00:04:59.615 "nvme_io_md": false, 00:04:59.615 "write_zeroes": true, 00:04:59.615 "zcopy": true, 00:04:59.615 "get_zone_info": false, 00:04:59.615 "zone_management": false, 00:04:59.615 "zone_append": false, 00:04:59.615 "compare": false, 00:04:59.615 "compare_and_write": false, 00:04:59.615 "abort": true, 00:04:59.615 "seek_hole": false, 00:04:59.615 "seek_data": false, 00:04:59.615 "copy": true, 00:04:59.615 "nvme_iov_md": false 00:04:59.615 }, 00:04:59.615 "memory_domains": [ 00:04:59.615 { 00:04:59.615 "dma_device_id": "system", 00:04:59.615 "dma_device_type": 1 00:04:59.615 }, 00:04:59.615 { 00:04:59.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.615 "dma_device_type": 2 00:04:59.615 } 00:04:59.615 ], 00:04:59.615 "driver_specific": {} 00:04:59.615 }, 00:04:59.615 { 00:04:59.615 "name": "Passthru0", 00:04:59.615 "aliases": [ 00:04:59.615 "b8fe75b9-afdd-5cf1-a9fd-20631c40d425" 00:04:59.615 ], 00:04:59.615 "product_name": "passthru", 00:04:59.615 "block_size": 512, 00:04:59.615 "num_blocks": 16384, 00:04:59.615 "uuid": "b8fe75b9-afdd-5cf1-a9fd-20631c40d425", 00:04:59.615 "assigned_rate_limits": { 00:04:59.615 "rw_ios_per_sec": 0, 00:04:59.615 "rw_mbytes_per_sec": 0, 00:04:59.615 "r_mbytes_per_sec": 0, 00:04:59.615 "w_mbytes_per_sec": 0 
00:04:59.615 }, 00:04:59.615 "claimed": false, 00:04:59.615 "zoned": false, 00:04:59.615 "supported_io_types": { 00:04:59.615 "read": true, 00:04:59.615 "write": true, 00:04:59.615 "unmap": true, 00:04:59.615 "flush": true, 00:04:59.615 "reset": true, 00:04:59.615 "nvme_admin": false, 00:04:59.615 "nvme_io": false, 00:04:59.615 "nvme_io_md": false, 00:04:59.615 "write_zeroes": true, 00:04:59.615 "zcopy": true, 00:04:59.615 "get_zone_info": false, 00:04:59.615 "zone_management": false, 00:04:59.615 "zone_append": false, 00:04:59.615 "compare": false, 00:04:59.615 "compare_and_write": false, 00:04:59.615 "abort": true, 00:04:59.615 "seek_hole": false, 00:04:59.615 "seek_data": false, 00:04:59.615 "copy": true, 00:04:59.615 "nvme_iov_md": false 00:04:59.615 }, 00:04:59.615 "memory_domains": [ 00:04:59.615 { 00:04:59.615 "dma_device_id": "system", 00:04:59.615 "dma_device_type": 1 00:04:59.615 }, 00:04:59.615 { 00:04:59.615 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:59.615 "dma_device_type": 2 00:04:59.615 } 00:04:59.615 ], 00:04:59.615 "driver_specific": { 00:04:59.615 "passthru": { 00:04:59.615 "name": "Passthru0", 00:04:59.615 "base_bdev_name": "Malloc2" 00:04:59.615 } 00:04:59.615 } 00:04:59.615 } 00:04:59.615 ]' 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:04:59.615 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.874 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.874 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:59.874 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:59.874 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.874 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:59.874 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:59.874 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:59.874 03:07:42 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:59.874 00:04:59.874 real 0m0.361s 00:04:59.874 user 0m0.203s 00:04:59.874 sys 0m0.057s 00:04:59.874 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:59.874 03:07:42 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:59.874 ************************************ 00:04:59.874 END TEST rpc_daemon_integrity 00:04:59.874 ************************************ 00:04:59.874 03:07:43 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:59.874 03:07:43 rpc -- rpc/rpc.sh@84 -- # killprocess 56925 00:04:59.874 03:07:43 rpc -- common/autotest_common.sh@950 -- # '[' -z 56925 ']' 00:04:59.874 03:07:43 rpc -- common/autotest_common.sh@954 -- # kill -0 56925 00:04:59.874 03:07:43 rpc -- common/autotest_common.sh@955 -- # uname 00:04:59.874 03:07:43 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:59.874 03:07:43 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 56925 00:04:59.874 03:07:43 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:59.874 03:07:43 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:59.874 
killing process with pid 56925 00:04:59.874 03:07:43 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 56925' 00:04:59.874 03:07:43 rpc -- common/autotest_common.sh@969 -- # kill 56925 00:04:59.874 03:07:43 rpc -- common/autotest_common.sh@974 -- # wait 56925 00:05:02.409 00:05:02.409 real 0m5.544s 00:05:02.409 user 0m6.068s 00:05:02.409 sys 0m0.942s 00:05:02.409 03:07:45 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:02.409 03:07:45 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.409 ************************************ 00:05:02.409 END TEST rpc 00:05:02.409 ************************************ 00:05:02.409 03:07:45 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:02.409 03:07:45 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.409 03:07:45 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.409 03:07:45 -- common/autotest_common.sh@10 -- # set +x 00:05:02.409 ************************************ 00:05:02.409 START TEST skip_rpc 00:05:02.409 ************************************ 00:05:02.409 03:07:45 skip_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:02.668 * Looking for test storage... 
00:05:02.668 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:02.668 03:07:45 skip_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:02.668 03:07:45 skip_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:02.668 03:07:45 skip_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:02.668 03:07:45 skip_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.668 03:07:45 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:02.668 03:07:45 skip_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.668 03:07:45 skip_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:02.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.668 --rc genhtml_branch_coverage=1 00:05:02.668 --rc genhtml_function_coverage=1 00:05:02.668 --rc genhtml_legend=1 00:05:02.668 --rc geninfo_all_blocks=1 00:05:02.668 --rc geninfo_unexecuted_blocks=1 00:05:02.668 00:05:02.668 ' 00:05:02.668 03:07:45 skip_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:02.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.668 --rc genhtml_branch_coverage=1 00:05:02.668 --rc genhtml_function_coverage=1 00:05:02.668 --rc genhtml_legend=1 00:05:02.668 --rc geninfo_all_blocks=1 00:05:02.668 --rc geninfo_unexecuted_blocks=1 00:05:02.668 00:05:02.668 ' 00:05:02.668 03:07:45 skip_rpc -- common/autotest_common.sh@1695 -- # export 
'LCOV=lcov 00:05:02.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.668 --rc genhtml_branch_coverage=1 00:05:02.668 --rc genhtml_function_coverage=1 00:05:02.668 --rc genhtml_legend=1 00:05:02.668 --rc geninfo_all_blocks=1 00:05:02.668 --rc geninfo_unexecuted_blocks=1 00:05:02.669 00:05:02.669 ' 00:05:02.669 03:07:45 skip_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:02.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.669 --rc genhtml_branch_coverage=1 00:05:02.669 --rc genhtml_function_coverage=1 00:05:02.669 --rc genhtml_legend=1 00:05:02.669 --rc geninfo_all_blocks=1 00:05:02.669 --rc geninfo_unexecuted_blocks=1 00:05:02.669 00:05:02.669 ' 00:05:02.669 03:07:45 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:02.669 03:07:45 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:02.669 03:07:45 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:02.669 03:07:45 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:02.669 03:07:45 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:02.669 03:07:45 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:02.669 ************************************ 00:05:02.669 START TEST skip_rpc 00:05:02.669 ************************************ 00:05:02.669 03:07:45 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:02.669 03:07:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=57160 00:05:02.669 03:07:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:02.669 03:07:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:02.669 03:07:45 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:02.927 [2024-10-09 03:07:46.009416] Starting SPDK v25.01-pre 
git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:02.927 [2024-10-09 03:07:46.009529] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57160 ] 00:05:02.927 [2024-10-09 03:07:46.173998] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.186 [2024-10-09 03:07:46.390973] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:08.485 03:07:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:08.485 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:08.485 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 57160 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 57160 ']' 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 57160 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57160 00:05:08.486 killing process with pid 57160 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57160' 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 57160 00:05:08.486 03:07:50 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 57160 00:05:10.392 ************************************ 00:05:10.392 END TEST skip_rpc 00:05:10.392 ************************************ 00:05:10.392 00:05:10.392 real 0m7.583s 00:05:10.392 user 0m7.087s 00:05:10.392 sys 0m0.413s 00:05:10.392 03:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.392 03:07:53 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.392 03:07:53 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:10.392 03:07:53 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.392 03:07:53 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.392 03:07:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.392 
************************************ 00:05:10.392 START TEST skip_rpc_with_json 00:05:10.392 ************************************ 00:05:10.392 03:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:10.392 03:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:10.392 03:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57270 00:05:10.392 03:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.392 03:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:10.392 03:07:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57270 00:05:10.392 03:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 57270 ']' 00:05:10.392 03:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.392 03:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.392 03:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.392 03:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.392 03:07:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:10.392 [2024-10-09 03:07:53.652972] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:10.392 [2024-10-09 03:07:53.653114] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57270 ] 00:05:10.651 [2024-10-09 03:07:53.817699] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:10.911 [2024-10-09 03:07:54.023854] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.850 03:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:11.850 03:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:11.850 03:07:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:11.850 03:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.850 03:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.850 [2024-10-09 03:07:54.838607] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:11.850 request: 00:05:11.850 { 00:05:11.850 "trtype": "tcp", 00:05:11.850 "method": "nvmf_get_transports", 00:05:11.850 "req_id": 1 00:05:11.850 } 00:05:11.850 Got JSON-RPC error response 00:05:11.850 response: 00:05:11.850 { 00:05:11.850 "code": -19, 00:05:11.850 "message": "No such device" 00:05:11.850 } 00:05:11.850 03:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:11.850 03:07:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:11.850 03:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.850 03:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.850 [2024-10-09 03:07:54.850689] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:05:11.850 03:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.850 03:07:54 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:11.850 03:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:11.850 03:07:54 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:11.850 03:07:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:11.850 03:07:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:11.850 { 00:05:11.850 "subsystems": [ 00:05:11.850 { 00:05:11.850 "subsystem": "fsdev", 00:05:11.850 "config": [ 00:05:11.850 { 00:05:11.850 "method": "fsdev_set_opts", 00:05:11.850 "params": { 00:05:11.850 "fsdev_io_pool_size": 65535, 00:05:11.850 "fsdev_io_cache_size": 256 00:05:11.850 } 00:05:11.850 } 00:05:11.850 ] 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "subsystem": "keyring", 00:05:11.850 "config": [] 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "subsystem": "iobuf", 00:05:11.850 "config": [ 00:05:11.850 { 00:05:11.850 "method": "iobuf_set_options", 00:05:11.850 "params": { 00:05:11.850 "small_pool_count": 8192, 00:05:11.850 "large_pool_count": 1024, 00:05:11.850 "small_bufsize": 8192, 00:05:11.850 "large_bufsize": 135168 00:05:11.850 } 00:05:11.850 } 00:05:11.850 ] 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "subsystem": "sock", 00:05:11.850 "config": [ 00:05:11.850 { 00:05:11.850 "method": "sock_set_default_impl", 00:05:11.850 "params": { 00:05:11.850 "impl_name": "posix" 00:05:11.850 } 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "method": "sock_impl_set_options", 00:05:11.850 "params": { 00:05:11.850 "impl_name": "ssl", 00:05:11.850 "recv_buf_size": 4096, 00:05:11.850 "send_buf_size": 4096, 00:05:11.850 "enable_recv_pipe": true, 00:05:11.850 "enable_quickack": false, 00:05:11.850 "enable_placement_id": 0, 00:05:11.850 
"enable_zerocopy_send_server": true, 00:05:11.850 "enable_zerocopy_send_client": false, 00:05:11.850 "zerocopy_threshold": 0, 00:05:11.850 "tls_version": 0, 00:05:11.850 "enable_ktls": false 00:05:11.850 } 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "method": "sock_impl_set_options", 00:05:11.850 "params": { 00:05:11.850 "impl_name": "posix", 00:05:11.850 "recv_buf_size": 2097152, 00:05:11.850 "send_buf_size": 2097152, 00:05:11.850 "enable_recv_pipe": true, 00:05:11.850 "enable_quickack": false, 00:05:11.850 "enable_placement_id": 0, 00:05:11.850 "enable_zerocopy_send_server": true, 00:05:11.850 "enable_zerocopy_send_client": false, 00:05:11.850 "zerocopy_threshold": 0, 00:05:11.850 "tls_version": 0, 00:05:11.850 "enable_ktls": false 00:05:11.850 } 00:05:11.850 } 00:05:11.850 ] 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "subsystem": "vmd", 00:05:11.850 "config": [] 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "subsystem": "accel", 00:05:11.850 "config": [ 00:05:11.850 { 00:05:11.850 "method": "accel_set_options", 00:05:11.850 "params": { 00:05:11.850 "small_cache_size": 128, 00:05:11.850 "large_cache_size": 16, 00:05:11.850 "task_count": 2048, 00:05:11.850 "sequence_count": 2048, 00:05:11.850 "buf_count": 2048 00:05:11.850 } 00:05:11.850 } 00:05:11.850 ] 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "subsystem": "bdev", 00:05:11.850 "config": [ 00:05:11.850 { 00:05:11.850 "method": "bdev_set_options", 00:05:11.850 "params": { 00:05:11.850 "bdev_io_pool_size": 65535, 00:05:11.850 "bdev_io_cache_size": 256, 00:05:11.850 "bdev_auto_examine": true, 00:05:11.850 "iobuf_small_cache_size": 128, 00:05:11.850 "iobuf_large_cache_size": 16 00:05:11.850 } 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "method": "bdev_raid_set_options", 00:05:11.850 "params": { 00:05:11.850 "process_window_size_kb": 1024, 00:05:11.850 "process_max_bandwidth_mb_sec": 0 00:05:11.850 } 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "method": "bdev_iscsi_set_options", 00:05:11.850 "params": { 00:05:11.850 
"timeout_sec": 30 00:05:11.850 } 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "method": "bdev_nvme_set_options", 00:05:11.850 "params": { 00:05:11.850 "action_on_timeout": "none", 00:05:11.850 "timeout_us": 0, 00:05:11.850 "timeout_admin_us": 0, 00:05:11.850 "keep_alive_timeout_ms": 10000, 00:05:11.850 "arbitration_burst": 0, 00:05:11.850 "low_priority_weight": 0, 00:05:11.850 "medium_priority_weight": 0, 00:05:11.850 "high_priority_weight": 0, 00:05:11.850 "nvme_adminq_poll_period_us": 10000, 00:05:11.850 "nvme_ioq_poll_period_us": 0, 00:05:11.850 "io_queue_requests": 0, 00:05:11.850 "delay_cmd_submit": true, 00:05:11.850 "transport_retry_count": 4, 00:05:11.850 "bdev_retry_count": 3, 00:05:11.850 "transport_ack_timeout": 0, 00:05:11.850 "ctrlr_loss_timeout_sec": 0, 00:05:11.850 "reconnect_delay_sec": 0, 00:05:11.850 "fast_io_fail_timeout_sec": 0, 00:05:11.850 "disable_auto_failback": false, 00:05:11.850 "generate_uuids": false, 00:05:11.850 "transport_tos": 0, 00:05:11.850 "nvme_error_stat": false, 00:05:11.850 "rdma_srq_size": 0, 00:05:11.850 "io_path_stat": false, 00:05:11.850 "allow_accel_sequence": false, 00:05:11.850 "rdma_max_cq_size": 0, 00:05:11.850 "rdma_cm_event_timeout_ms": 0, 00:05:11.850 "dhchap_digests": [ 00:05:11.850 "sha256", 00:05:11.850 "sha384", 00:05:11.850 "sha512" 00:05:11.850 ], 00:05:11.850 "dhchap_dhgroups": [ 00:05:11.850 "null", 00:05:11.850 "ffdhe2048", 00:05:11.850 "ffdhe3072", 00:05:11.850 "ffdhe4096", 00:05:11.850 "ffdhe6144", 00:05:11.850 "ffdhe8192" 00:05:11.850 ] 00:05:11.850 } 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "method": "bdev_nvme_set_hotplug", 00:05:11.850 "params": { 00:05:11.850 "period_us": 100000, 00:05:11.850 "enable": false 00:05:11.850 } 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "method": "bdev_wait_for_examine" 00:05:11.850 } 00:05:11.850 ] 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "subsystem": "scsi", 00:05:11.850 "config": null 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "subsystem": "scheduler", 
00:05:11.850 "config": [ 00:05:11.850 { 00:05:11.850 "method": "framework_set_scheduler", 00:05:11.850 "params": { 00:05:11.850 "name": "static" 00:05:11.850 } 00:05:11.850 } 00:05:11.850 ] 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "subsystem": "vhost_scsi", 00:05:11.850 "config": [] 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "subsystem": "vhost_blk", 00:05:11.850 "config": [] 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "subsystem": "ublk", 00:05:11.850 "config": [] 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "subsystem": "nbd", 00:05:11.850 "config": [] 00:05:11.850 }, 00:05:11.850 { 00:05:11.850 "subsystem": "nvmf", 00:05:11.850 "config": [ 00:05:11.850 { 00:05:11.850 "method": "nvmf_set_config", 00:05:11.850 "params": { 00:05:11.850 "discovery_filter": "match_any", 00:05:11.851 "admin_cmd_passthru": { 00:05:11.851 "identify_ctrlr": false 00:05:11.851 }, 00:05:11.851 "dhchap_digests": [ 00:05:11.851 "sha256", 00:05:11.851 "sha384", 00:05:11.851 "sha512" 00:05:11.851 ], 00:05:11.851 "dhchap_dhgroups": [ 00:05:11.851 "null", 00:05:11.851 "ffdhe2048", 00:05:11.851 "ffdhe3072", 00:05:11.851 "ffdhe4096", 00:05:11.851 "ffdhe6144", 00:05:11.851 "ffdhe8192" 00:05:11.851 ] 00:05:11.851 } 00:05:11.851 }, 00:05:11.851 { 00:05:11.851 "method": "nvmf_set_max_subsystems", 00:05:11.851 "params": { 00:05:11.851 "max_subsystems": 1024 00:05:11.851 } 00:05:11.851 }, 00:05:11.851 { 00:05:11.851 "method": "nvmf_set_crdt", 00:05:11.851 "params": { 00:05:11.851 "crdt1": 0, 00:05:11.851 "crdt2": 0, 00:05:11.851 "crdt3": 0 00:05:11.851 } 00:05:11.851 }, 00:05:11.851 { 00:05:11.851 "method": "nvmf_create_transport", 00:05:11.851 "params": { 00:05:11.851 "trtype": "TCP", 00:05:11.851 "max_queue_depth": 128, 00:05:11.851 "max_io_qpairs_per_ctrlr": 127, 00:05:11.851 "in_capsule_data_size": 4096, 00:05:11.851 "max_io_size": 131072, 00:05:11.851 "io_unit_size": 131072, 00:05:11.851 "max_aq_depth": 128, 00:05:11.851 "num_shared_buffers": 511, 00:05:11.851 "buf_cache_size": 4294967295, 
00:05:11.851 "dif_insert_or_strip": false, 00:05:11.851 "zcopy": false, 00:05:11.851 "c2h_success": true, 00:05:11.851 "sock_priority": 0, 00:05:11.851 "abort_timeout_sec": 1, 00:05:11.851 "ack_timeout": 0, 00:05:11.851 "data_wr_pool_size": 0 00:05:11.851 } 00:05:11.851 } 00:05:11.851 ] 00:05:11.851 }, 00:05:11.851 { 00:05:11.851 "subsystem": "iscsi", 00:05:11.851 "config": [ 00:05:11.851 { 00:05:11.851 "method": "iscsi_set_options", 00:05:11.851 "params": { 00:05:11.851 "node_base": "iqn.2016-06.io.spdk", 00:05:11.851 "max_sessions": 128, 00:05:11.851 "max_connections_per_session": 2, 00:05:11.851 "max_queue_depth": 64, 00:05:11.851 "default_time2wait": 2, 00:05:11.851 "default_time2retain": 20, 00:05:11.851 "first_burst_length": 8192, 00:05:11.851 "immediate_data": true, 00:05:11.851 "allow_duplicated_isid": false, 00:05:11.851 "error_recovery_level": 0, 00:05:11.851 "nop_timeout": 60, 00:05:11.851 "nop_in_interval": 30, 00:05:11.851 "disable_chap": false, 00:05:11.851 "require_chap": false, 00:05:11.851 "mutual_chap": false, 00:05:11.851 "chap_group": 0, 00:05:11.851 "max_large_datain_per_connection": 64, 00:05:11.851 "max_r2t_per_connection": 4, 00:05:11.851 "pdu_pool_size": 36864, 00:05:11.851 "immediate_data_pool_size": 16384, 00:05:11.851 "data_out_pool_size": 2048 00:05:11.851 } 00:05:11.851 } 00:05:11.851 ] 00:05:11.851 } 00:05:11.851 ] 00:05:11.851 } 00:05:11.851 03:07:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:11.851 03:07:55 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57270 00:05:11.851 03:07:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57270 ']' 00:05:11.851 03:07:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57270 00:05:11.851 03:07:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:11.851 03:07:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
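The configuration dump above follows SPDK's JSON config shape: a top-level `subsystems` array whose entries each name a `subsystem` and carry a `config` list of `{method, params}` RPC calls replayed at startup (or `null`/`[]` when the subsystem takes none). A minimal sketch of that shape, with illustrative values rather than the dump's full parameter set:

```shell
#!/usr/bin/env bash
# Minimal config in the same shape as the dump above: a subsystems
# array of { subsystem, config: [ { method, params } ] } entries.
# Values here are illustrative, not the test's actual config.json.
cat > /tmp/demo_config.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "scheduler",
      "config": [
        { "method": "framework_set_scheduler",
          "params": { "name": "static" } }
      ]
    },
    { "subsystem": "vmd", "config": [] }
  ]
}
EOF

# List every RPC method the config would replay at startup.
grep -o '"method": "[^"]*"' /tmp/demo_config.json
```

A real consumer would parse this with jq or a JSON library; the grep is only enough to show the method-per-entry structure that `spdk_tgt --json` walks.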
00:05:11.851 03:07:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57270 00:05:11.851 03:07:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:11.851 03:07:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:11.851 killing process with pid 57270 00:05:11.851 03:07:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57270' 00:05:11.851 03:07:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57270 00:05:11.851 03:07:55 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57270 00:05:14.390 03:07:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57326 00:05:14.390 03:07:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:14.390 03:07:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:19.676 03:08:02 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57326 00:05:19.676 03:08:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 57326 ']' 00:05:19.676 03:08:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 57326 00:05:19.676 03:08:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:19.676 03:08:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:19.676 03:08:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57326 00:05:19.676 03:08:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:19.676 03:08:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:19.676 03:08:02 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 57326' 00:05:19.676 killing process with pid 57326 00:05:19.676 03:08:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 57326 00:05:19.676 03:08:02 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 57326 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:22.211 00:05:22.211 real 0m11.566s 00:05:22.211 user 0m10.964s 00:05:22.211 sys 0m0.871s 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.211 ************************************ 00:05:22.211 END TEST skip_rpc_with_json 00:05:22.211 ************************************ 00:05:22.211 03:08:05 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:22.211 03:08:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.211 03:08:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.211 03:08:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.211 ************************************ 00:05:22.211 START TEST skip_rpc_with_delay 00:05:22.211 ************************************ 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 
-- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.211 [2024-10-09 03:08:05.287293] app.c: 840:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:22.211 [2024-10-09 03:08:05.287415] app.c: 719:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:22.211 00:05:22.211 real 0m0.167s 00:05:22.211 user 0m0.094s 00:05:22.211 sys 0m0.072s 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:22.211 03:08:05 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:22.211 ************************************ 00:05:22.211 END TEST skip_rpc_with_delay 00:05:22.211 ************************************ 00:05:22.211 03:08:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:22.211 03:08:05 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:22.211 03:08:05 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:22.211 03:08:05 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:22.211 03:08:05 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:22.211 03:08:05 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.211 ************************************ 00:05:22.211 START TEST exit_on_failed_rpc_init 00:05:22.211 ************************************ 00:05:22.211 03:08:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:22.211 03:08:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:22.211 03:08:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57454 
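The `NOT .../spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc` run traced above is an expected-failure assertion: the helper succeeds only when the wrapped command exits non-zero, which is why the `*ERROR*: Cannot use '--wait-for-rpc'` message counts as a pass. A simplified sketch of that pattern (the real autotest_common.sh helper also routes through `valid_exec_arg` and inspects exit-code ranges; this version only inverts the status):

```shell
#!/usr/bin/env bash
# Simplified expected-failure wrapper: invert the exit status of a
# command the test expects to fail.
NOT() {
    local es=0
    "$@" || es=$?
    # Succeed (return 0) only if the wrapped command failed.
    (( es != 0 ))
}

# grep for an absent string exits 1, so NOT reports success.
if NOT grep -q absent-string <<< "hello world"; then
    echo "expected failure observed"
fi
```

The same shape makes negative tests composable: any command line can be prefixed with `NOT` without restructuring the test around explicit exit-code checks.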
00:05:22.211 03:08:05 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57454 00:05:22.211 03:08:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 57454 ']' 00:05:22.211 03:08:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.211 03:08:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:22.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.211 03:08:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.211 03:08:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:22.211 03:08:05 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:22.470 [2024-10-09 03:08:05.524888] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
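The `waitforlisten 57454` call above blocks until the freshly started target is up and listening on `/var/tmp/spdk.sock`, retrying up to `max_retries` times. A rough stand-in that polls for a path instead of issuing a real RPC probe (the helper name and semantics here are assumptions, not the actual implementation):

```shell
#!/usr/bin/env bash
# Hypothetical polling loop in the spirit of waitforlisten: succeed
# once the target's socket path appears, fail after max_retries.
wait_for_path() {
    local path=$1 max_retries=${2:-100}
    while (( max_retries-- > 0 )); do
        # -S matches a real Unix socket; -e covers the flag file
        # used as a stand-in below.
        [[ -S $path || -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}

touch /tmp/demo.sock.flag   # stand-in for the target creating its socket
wait_for_path /tmp/demo.sock.flag && echo "target is listening"
```

The real helper goes further and confirms the RPC server answers on the socket, since the path can exist before the listener is ready.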
00:05:22.470 [2024-10-09 03:08:05.525031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57454 ] 00:05:22.470 [2024-10-09 03:08:05.693717] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.740 [2024-10-09 03:08:05.912399] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.682 03:08:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:23.682 03:08:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:23.682 03:08:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.682 03:08:06 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.682 03:08:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:23.682 03:08:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.682 03:08:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.682 03:08:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:23.682 03:08:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.682 03:08:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:23.682 03:08:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.682 03:08:06 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:23.682 03:08:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:23.682 03:08:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:05:23.682 03:08:06 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.682 [2024-10-09 03:08:06.900258] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:23.682 [2024-10-09 03:08:06.900391] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57483 ] 00:05:23.941 [2024-10-09 03:08:07.064078] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.201 [2024-10-09 03:08:07.282742] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:24.201 [2024-10-09 03:08:07.282855] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:05:24.201 [2024-10-09 03:08:07.282870] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:24.201 [2024-10-09 03:08:07.282881] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:24.460 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:24.460 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:24.460 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:24.460 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:24.460 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:24.460 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:24.460 03:08:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:24.461 03:08:07 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57454 00:05:24.461 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 57454 ']' 00:05:24.461 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 57454 00:05:24.461 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:24.461 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:24.461 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57454 00:05:24.461 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:24.461 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:24.461 killing process with pid 57454 00:05:24.461 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # 
echo 'killing process with pid 57454' 00:05:24.461 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 57454 00:05:24.461 03:08:07 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 57454 00:05:27.014 00:05:27.014 real 0m4.848s 00:05:27.014 user 0m5.399s 00:05:27.014 sys 0m0.573s 00:05:27.014 03:08:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.014 03:08:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:27.014 ************************************ 00:05:27.014 END TEST exit_on_failed_rpc_init 00:05:27.014 ************************************ 00:05:27.273 03:08:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:27.273 00:05:27.273 real 0m24.650s 00:05:27.273 user 0m23.745s 00:05:27.273 sys 0m2.232s 00:05:27.273 03:08:10 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.273 03:08:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.273 ************************************ 00:05:27.273 END TEST skip_rpc 00:05:27.273 ************************************ 00:05:27.273 03:08:10 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:27.273 03:08:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.273 03:08:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.273 03:08:10 -- common/autotest_common.sh@10 -- # set +x 00:05:27.273 ************************************ 00:05:27.273 START TEST rpc_client 00:05:27.273 ************************************ 00:05:27.273 03:08:10 rpc_client -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:05:27.273 * Looking for test storage... 
00:05:27.273 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:05:27.273 03:08:10 rpc_client -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.273 03:08:10 rpc_client -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.273 03:08:10 rpc_client -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.532 03:08:10 rpc_client -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.532 03:08:10 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:27.532 03:08:10 rpc_client -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.532 03:08:10 rpc_client -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.532 --rc genhtml_branch_coverage=1 00:05:27.532 --rc genhtml_function_coverage=1 00:05:27.532 --rc genhtml_legend=1 00:05:27.532 --rc geninfo_all_blocks=1 00:05:27.532 --rc geninfo_unexecuted_blocks=1 00:05:27.532 00:05:27.532 ' 00:05:27.532 03:08:10 rpc_client -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.532 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.532 --rc genhtml_branch_coverage=1 00:05:27.532 --rc genhtml_function_coverage=1 00:05:27.532 --rc genhtml_legend=1 00:05:27.532 --rc geninfo_all_blocks=1 00:05:27.532 --rc geninfo_unexecuted_blocks=1 00:05:27.532 00:05:27.532 ' 00:05:27.533 03:08:10 rpc_client -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:27.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.533 --rc genhtml_branch_coverage=1 00:05:27.533 --rc genhtml_function_coverage=1 00:05:27.533 --rc genhtml_legend=1 00:05:27.533 --rc geninfo_all_blocks=1 00:05:27.533 --rc geninfo_unexecuted_blocks=1 00:05:27.533 00:05:27.533 ' 00:05:27.533 03:08:10 rpc_client -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.533 --rc genhtml_branch_coverage=1 00:05:27.533 --rc genhtml_function_coverage=1 00:05:27.533 --rc genhtml_legend=1 00:05:27.533 --rc geninfo_all_blocks=1 00:05:27.533 --rc geninfo_unexecuted_blocks=1 00:05:27.533 00:05:27.533 ' 00:05:27.533 03:08:10 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:05:27.533 OK 00:05:27.533 03:08:10 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:27.533 00:05:27.533 real 0m0.290s 00:05:27.533 user 0m0.157s 00:05:27.533 sys 0m0.151s 00:05:27.533 03:08:10 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.533 03:08:10 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:27.533 ************************************ 00:05:27.533 END TEST rpc_client 00:05:27.533 ************************************ 00:05:27.533 03:08:10 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:27.533 03:08:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.533 03:08:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.533 03:08:10 -- common/autotest_common.sh@10 -- # set +x 00:05:27.533 ************************************ 00:05:27.533 START TEST json_config 00:05:27.533 ************************************ 00:05:27.533 03:08:10 json_config -- common/autotest_common.sh@1125 -- # 
/home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:05:27.533 03:08:10 json_config -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:27.533 03:08:10 json_config -- common/autotest_common.sh@1681 -- # lcov --version 00:05:27.533 03:08:10 json_config -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:27.792 03:08:10 json_config -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:27.792 03:08:10 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:27.792 03:08:10 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:27.792 03:08:10 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:27.792 03:08:10 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:27.792 03:08:10 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:27.792 03:08:10 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:27.792 03:08:10 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:27.792 03:08:10 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:27.792 03:08:10 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:27.792 03:08:10 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:27.792 03:08:10 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:27.792 03:08:10 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:27.792 03:08:10 json_config -- scripts/common.sh@345 -- # : 1 00:05:27.792 03:08:10 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:27.792 03:08:10 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:27.792 03:08:10 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:27.792 03:08:10 json_config -- scripts/common.sh@353 -- # local d=1 00:05:27.792 03:08:10 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:27.792 03:08:10 json_config -- scripts/common.sh@355 -- # echo 1 00:05:27.792 03:08:10 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:27.792 03:08:10 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:27.792 03:08:10 json_config -- scripts/common.sh@353 -- # local d=2 00:05:27.792 03:08:10 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:27.792 03:08:10 json_config -- scripts/common.sh@355 -- # echo 2 00:05:27.792 03:08:10 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:27.792 03:08:10 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:27.792 03:08:10 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:27.792 03:08:10 json_config -- scripts/common.sh@368 -- # return 0 00:05:27.792 03:08:10 json_config -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:27.792 03:08:10 json_config -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:27.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.792 --rc genhtml_branch_coverage=1 00:05:27.792 --rc genhtml_function_coverage=1 00:05:27.792 --rc genhtml_legend=1 00:05:27.792 --rc geninfo_all_blocks=1 00:05:27.792 --rc geninfo_unexecuted_blocks=1 00:05:27.792 00:05:27.792 ' 00:05:27.792 03:08:10 json_config -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:27.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.792 --rc genhtml_branch_coverage=1 00:05:27.792 --rc genhtml_function_coverage=1 00:05:27.792 --rc genhtml_legend=1 00:05:27.792 --rc geninfo_all_blocks=1 00:05:27.792 --rc geninfo_unexecuted_blocks=1 00:05:27.792 00:05:27.792 ' 00:05:27.792 03:08:10 json_config -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:27.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.792 --rc genhtml_branch_coverage=1 00:05:27.792 --rc genhtml_function_coverage=1 00:05:27.792 --rc genhtml_legend=1 00:05:27.792 --rc geninfo_all_blocks=1 00:05:27.792 --rc geninfo_unexecuted_blocks=1 00:05:27.792 00:05:27.792 ' 00:05:27.792 03:08:10 json_config -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:27.792 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:27.792 --rc genhtml_branch_coverage=1 00:05:27.792 --rc genhtml_function_coverage=1 00:05:27.792 --rc genhtml_legend=1 00:05:27.792 --rc geninfo_all_blocks=1 00:05:27.792 --rc geninfo_unexecuted_blocks=1 00:05:27.792 00:05:27.792 ' 00:05:27.792 03:08:10 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ebf1727-052a-45f1-8522-0162d29da5c7 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=9ebf1727-052a-45f1-8522-0162d29da5c7 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:27.792 03:08:10 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:27.792 03:08:10 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:27.792 03:08:10 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:27.792 03:08:10 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:27.792 03:08:10 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.792 03:08:10 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.792 03:08:10 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.792 03:08:10 json_config -- paths/export.sh@5 -- # export PATH 00:05:27.792 03:08:10 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@51 -- # : 0 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:27.792 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:27.792 03:08:10 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:27.792 03:08:10 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:05:27.792 03:08:10 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:27.792 03:08:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:27.792 03:08:10 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:27.792 03:08:10 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:27.792 WARNING: No tests are enabled so not running JSON configuration tests 00:05:27.792 03:08:10 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:05:27.792 03:08:10 json_config -- json_config/json_config.sh@28 -- # exit 0 00:05:27.792 00:05:27.792 real 0m0.220s 00:05:27.792 user 0m0.133s 00:05:27.792 sys 0m0.096s 00:05:27.792 03:08:10 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.793 03:08:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:27.793 ************************************ 00:05:27.793 END TEST json_config 00:05:27.793 ************************************ 00:05:27.793 03:08:11 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:27.793 03:08:11 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.793 03:08:11 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.793 03:08:11 -- common/autotest_common.sh@10 -- # set +x 00:05:27.793 ************************************ 00:05:27.793 START TEST json_config_extra_key 00:05:27.793 ************************************ 00:05:27.793 03:08:11 json_config_extra_key -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:05:28.052 03:08:11 json_config_extra_key -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:28.052 03:08:11 json_config_extra_key -- 
common/autotest_common.sh@1681 -- # lcov --version 00:05:28.052 03:08:11 json_config_extra_key -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:28.052 03:08:11 json_config_extra_key -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:28.052 03:08:11 json_config_extra_key -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:28.052 03:08:11 json_config_extra_key -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:28.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.052 --rc genhtml_branch_coverage=1 00:05:28.052 --rc genhtml_function_coverage=1 00:05:28.052 --rc genhtml_legend=1 00:05:28.052 --rc geninfo_all_blocks=1 00:05:28.052 --rc geninfo_unexecuted_blocks=1 00:05:28.052 00:05:28.052 ' 00:05:28.052 03:08:11 json_config_extra_key -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:28.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.052 --rc genhtml_branch_coverage=1 00:05:28.052 --rc genhtml_function_coverage=1 00:05:28.052 --rc 
genhtml_legend=1 00:05:28.052 --rc geninfo_all_blocks=1 00:05:28.052 --rc geninfo_unexecuted_blocks=1 00:05:28.052 00:05:28.052 ' 00:05:28.052 03:08:11 json_config_extra_key -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:28.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.052 --rc genhtml_branch_coverage=1 00:05:28.052 --rc genhtml_function_coverage=1 00:05:28.052 --rc genhtml_legend=1 00:05:28.052 --rc geninfo_all_blocks=1 00:05:28.052 --rc geninfo_unexecuted_blocks=1 00:05:28.052 00:05:28.052 ' 00:05:28.052 03:08:11 json_config_extra_key -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:28.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:28.052 --rc genhtml_branch_coverage=1 00:05:28.052 --rc genhtml_function_coverage=1 00:05:28.052 --rc genhtml_legend=1 00:05:28.052 --rc geninfo_all_blocks=1 00:05:28.052 --rc geninfo_unexecuted_blocks=1 00:05:28.052 00:05:28.052 ' 00:05:28.052 03:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:9ebf1727-052a-45f1-8522-0162d29da5c7 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=9ebf1727-052a-45f1-8522-0162d29da5c7 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:28.052 03:08:11 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:28.052 03:08:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:28.053 03:08:11 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.053 03:08:11 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.053 03:08:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.053 03:08:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:28.053 03:08:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:28.053 03:08:11 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:28.053 03:08:11 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:28.053 03:08:11 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:28.053 03:08:11 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:28.053 03:08:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:28.053 03:08:11 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:05:28.053 03:08:11 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:28.053 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:28.053 03:08:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:28.053 03:08:11 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:28.053 03:08:11 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:28.053 03:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:05:28.053 03:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:28.053 03:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:28.053 03:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:28.053 03:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:28.053 03:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:28.053 03:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:28.053 03:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:05:28.053 03:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:28.053 03:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:28.053 INFO: launching applications... 00:05:28.053 03:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
00:05:28.053 03:08:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:28.053 03:08:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:28.053 03:08:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:28.053 03:08:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:28.053 03:08:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:28.053 03:08:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:28.053 03:08:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.053 03:08:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:28.053 03:08:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57693 00:05:28.053 Waiting for target to run... 00:05:28.053 03:08:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:28.053 03:08:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57693 /var/tmp/spdk_tgt.sock 00:05:28.053 03:08:11 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 57693 ']' 00:05:28.053 03:08:11 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:05:28.053 03:08:11 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:28.053 03:08:11 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:28.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:28.053 03:08:11 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:28.053 03:08:11 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:28.053 03:08:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:28.053 [2024-10-09 03:08:11.338980] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:28.053 [2024-10-09 03:08:11.339124] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57693 ] 00:05:28.621 [2024-10-09 03:08:11.713156] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.621 [2024-10-09 03:08:11.909334] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.556 03:08:12 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:29.556 03:08:12 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:05:29.556 00:05:29.556 03:08:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:29.556 INFO: shutting down applications... 00:05:29.556 03:08:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:05:29.556 03:08:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:29.556 03:08:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:29.556 03:08:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:29.556 03:08:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57693 ]] 00:05:29.556 03:08:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57693 00:05:29.556 03:08:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:29.556 03:08:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:29.556 03:08:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57693 00:05:29.556 03:08:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:30.127 03:08:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:30.127 03:08:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.127 03:08:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57693 00:05:30.127 03:08:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:30.385 03:08:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:30.385 03:08:13 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.385 03:08:13 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57693 00:05:30.385 03:08:13 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:30.949 03:08:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:30.949 03:08:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:30.949 03:08:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57693 00:05:30.949 03:08:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:31.514 03:08:14 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:05:31.514 03:08:14 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:31.514 03:08:14 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57693 00:05:31.514 03:08:14 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.079 03:08:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.079 03:08:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.079 03:08:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57693 00:05:32.079 03:08:15 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:32.646 03:08:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:32.646 03:08:15 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.646 03:08:15 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57693 00:05:32.646 03:08:15 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:32.646 03:08:15 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:32.646 03:08:15 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:32.646 SPDK target shutdown done 00:05:32.646 03:08:15 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:32.646 Success 00:05:32.646 03:08:15 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:32.646 00:05:32.646 real 0m4.690s 00:05:32.646 user 0m4.292s 00:05:32.646 sys 0m0.528s 00:05:32.646 03:08:15 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.646 03:08:15 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:32.646 ************************************ 00:05:32.646 END TEST json_config_extra_key 00:05:32.646 ************************************ 00:05:32.646 03:08:15 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:32.646 03:08:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.646 03:08:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.646 03:08:15 -- common/autotest_common.sh@10 -- # set +x 00:05:32.646 ************************************ 00:05:32.646 START TEST alias_rpc 00:05:32.646 ************************************ 00:05:32.646 03:08:15 alias_rpc -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:32.646 * Looking for test storage... 00:05:32.646 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:05:32.646 03:08:15 alias_rpc -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:32.646 03:08:15 alias_rpc -- common/autotest_common.sh@1681 -- # lcov --version 00:05:32.646 03:08:15 alias_rpc -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:32.904 03:08:15 alias_rpc -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:32.904 03:08:15 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:32.904 03:08:15 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:32.904 03:08:15 alias_rpc -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:32.904 03:08:15 alias_rpc -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:32.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.904 --rc genhtml_branch_coverage=1 00:05:32.904 --rc genhtml_function_coverage=1 00:05:32.904 --rc genhtml_legend=1 00:05:32.904 --rc geninfo_all_blocks=1 00:05:32.904 --rc geninfo_unexecuted_blocks=1 00:05:32.904 00:05:32.904 ' 00:05:32.904 03:08:15 alias_rpc -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:32.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.904 --rc genhtml_branch_coverage=1 00:05:32.904 --rc genhtml_function_coverage=1 00:05:32.904 --rc 
genhtml_legend=1 00:05:32.904 --rc geninfo_all_blocks=1 00:05:32.904 --rc geninfo_unexecuted_blocks=1 00:05:32.904 00:05:32.904 ' 00:05:32.904 03:08:15 alias_rpc -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:32.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.904 --rc genhtml_branch_coverage=1 00:05:32.904 --rc genhtml_function_coverage=1 00:05:32.904 --rc genhtml_legend=1 00:05:32.904 --rc geninfo_all_blocks=1 00:05:32.904 --rc geninfo_unexecuted_blocks=1 00:05:32.904 00:05:32.904 ' 00:05:32.904 03:08:15 alias_rpc -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:32.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:32.905 --rc genhtml_branch_coverage=1 00:05:32.905 --rc genhtml_function_coverage=1 00:05:32.905 --rc genhtml_legend=1 00:05:32.905 --rc geninfo_all_blocks=1 00:05:32.905 --rc geninfo_unexecuted_blocks=1 00:05:32.905 00:05:32.905 ' 00:05:32.905 03:08:15 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:32.905 03:08:15 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57806 00:05:32.905 03:08:15 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:32.905 03:08:15 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57806 00:05:32.905 03:08:15 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 57806 ']' 00:05:32.905 03:08:16 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:32.905 03:08:16 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:32.905 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:32.905 03:08:16 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:32.905 03:08:16 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:32.905 03:08:16 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.905 [2024-10-09 03:08:16.085714] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:32.905 [2024-10-09 03:08:16.085829] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57806 ] 00:05:33.162 [2024-10-09 03:08:16.248947] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.162 [2024-10-09 03:08:16.464656] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.098 03:08:17 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.098 03:08:17 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:34.098 03:08:17 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:05:34.357 03:08:17 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57806 00:05:34.357 03:08:17 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 57806 ']' 00:05:34.357 03:08:17 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 57806 00:05:34.357 03:08:17 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:05:34.357 03:08:17 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:34.357 03:08:17 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57806 00:05:34.357 03:08:17 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:34.357 03:08:17 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:34.357 killing process with pid 57806 00:05:34.357 03:08:17 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57806' 00:05:34.357 03:08:17 alias_rpc -- 
common/autotest_common.sh@969 -- # kill 57806 00:05:34.357 03:08:17 alias_rpc -- common/autotest_common.sh@974 -- # wait 57806 00:05:36.885 00:05:36.885 real 0m4.307s 00:05:36.885 user 0m4.265s 00:05:36.885 sys 0m0.578s 00:05:36.885 03:08:20 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:36.885 03:08:20 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.885 ************************************ 00:05:36.885 END TEST alias_rpc 00:05:36.885 ************************************ 00:05:36.885 03:08:20 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:36.885 03:08:20 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:36.885 03:08:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:36.885 03:08:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:36.885 03:08:20 -- common/autotest_common.sh@10 -- # set +x 00:05:36.885 ************************************ 00:05:36.885 START TEST spdkcli_tcp 00:05:36.885 ************************************ 00:05:36.885 03:08:20 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:05:37.144 * Looking for test storage... 
00:05:37.144 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lcov --version 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.144 03:08:20 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:37.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.144 --rc genhtml_branch_coverage=1 00:05:37.144 --rc genhtml_function_coverage=1 00:05:37.144 --rc genhtml_legend=1 00:05:37.144 --rc geninfo_all_blocks=1 00:05:37.144 --rc geninfo_unexecuted_blocks=1 00:05:37.144 00:05:37.144 ' 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:37.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.144 --rc genhtml_branch_coverage=1 00:05:37.144 --rc genhtml_function_coverage=1 00:05:37.144 --rc genhtml_legend=1 00:05:37.144 --rc geninfo_all_blocks=1 00:05:37.144 --rc geninfo_unexecuted_blocks=1 00:05:37.144 00:05:37.144 ' 00:05:37.144 03:08:20 spdkcli_tcp -- 
common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:37.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.144 --rc genhtml_branch_coverage=1 00:05:37.144 --rc genhtml_function_coverage=1 00:05:37.144 --rc genhtml_legend=1 00:05:37.144 --rc geninfo_all_blocks=1 00:05:37.144 --rc geninfo_unexecuted_blocks=1 00:05:37.144 00:05:37.144 ' 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:37.144 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.144 --rc genhtml_branch_coverage=1 00:05:37.144 --rc genhtml_function_coverage=1 00:05:37.144 --rc genhtml_legend=1 00:05:37.144 --rc geninfo_all_blocks=1 00:05:37.144 --rc geninfo_unexecuted_blocks=1 00:05:37.144 00:05:37.144 ' 00:05:37.144 03:08:20 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:05:37.144 03:08:20 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:05:37.144 03:08:20 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:05:37.144 03:08:20 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:37.144 03:08:20 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:37.144 03:08:20 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:37.144 03:08:20 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.144 03:08:20 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:37.144 03:08:20 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57913 00:05:37.144 03:08:20 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57913 00:05:37.144 03:08:20 spdkcli_tcp -- 
common/autotest_common.sh@831 -- # '[' -z 57913 ']' 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:37.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:37.144 03:08:20 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.403 [2024-10-09 03:08:20.452286] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:37.403 [2024-10-09 03:08:20.452393] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57913 ] 00:05:37.403 [2024-10-09 03:08:20.616759] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:37.661 [2024-10-09 03:08:20.829431] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.661 [2024-10-09 03:08:20.829473] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:38.608 03:08:21 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:38.608 03:08:21 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:05:38.608 03:08:21 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57935 00:05:38.608 03:08:21 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:38.608 03:08:21 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:38.867 [ 00:05:38.867 "bdev_malloc_delete", 
00:05:38.867 "bdev_malloc_create", 00:05:38.867 "bdev_null_resize", 00:05:38.867 "bdev_null_delete", 00:05:38.867 "bdev_null_create", 00:05:38.867 "bdev_nvme_cuse_unregister", 00:05:38.867 "bdev_nvme_cuse_register", 00:05:38.867 "bdev_opal_new_user", 00:05:38.867 "bdev_opal_set_lock_state", 00:05:38.867 "bdev_opal_delete", 00:05:38.867 "bdev_opal_get_info", 00:05:38.867 "bdev_opal_create", 00:05:38.867 "bdev_nvme_opal_revert", 00:05:38.867 "bdev_nvme_opal_init", 00:05:38.867 "bdev_nvme_send_cmd", 00:05:38.867 "bdev_nvme_set_keys", 00:05:38.867 "bdev_nvme_get_path_iostat", 00:05:38.867 "bdev_nvme_get_mdns_discovery_info", 00:05:38.867 "bdev_nvme_stop_mdns_discovery", 00:05:38.867 "bdev_nvme_start_mdns_discovery", 00:05:38.867 "bdev_nvme_set_multipath_policy", 00:05:38.867 "bdev_nvme_set_preferred_path", 00:05:38.867 "bdev_nvme_get_io_paths", 00:05:38.867 "bdev_nvme_remove_error_injection", 00:05:38.867 "bdev_nvme_add_error_injection", 00:05:38.867 "bdev_nvme_get_discovery_info", 00:05:38.867 "bdev_nvme_stop_discovery", 00:05:38.867 "bdev_nvme_start_discovery", 00:05:38.867 "bdev_nvme_get_controller_health_info", 00:05:38.867 "bdev_nvme_disable_controller", 00:05:38.867 "bdev_nvme_enable_controller", 00:05:38.867 "bdev_nvme_reset_controller", 00:05:38.867 "bdev_nvme_get_transport_statistics", 00:05:38.867 "bdev_nvme_apply_firmware", 00:05:38.867 "bdev_nvme_detach_controller", 00:05:38.867 "bdev_nvme_get_controllers", 00:05:38.867 "bdev_nvme_attach_controller", 00:05:38.867 "bdev_nvme_set_hotplug", 00:05:38.867 "bdev_nvme_set_options", 00:05:38.867 "bdev_passthru_delete", 00:05:38.867 "bdev_passthru_create", 00:05:38.867 "bdev_lvol_set_parent_bdev", 00:05:38.867 "bdev_lvol_set_parent", 00:05:38.867 "bdev_lvol_check_shallow_copy", 00:05:38.867 "bdev_lvol_start_shallow_copy", 00:05:38.867 "bdev_lvol_grow_lvstore", 00:05:38.867 "bdev_lvol_get_lvols", 00:05:38.867 "bdev_lvol_get_lvstores", 00:05:38.867 "bdev_lvol_delete", 00:05:38.867 "bdev_lvol_set_read_only", 
00:05:38.867 "bdev_lvol_resize", 00:05:38.867 "bdev_lvol_decouple_parent", 00:05:38.867 "bdev_lvol_inflate", 00:05:38.867 "bdev_lvol_rename", 00:05:38.867 "bdev_lvol_clone_bdev", 00:05:38.867 "bdev_lvol_clone", 00:05:38.867 "bdev_lvol_snapshot", 00:05:38.867 "bdev_lvol_create", 00:05:38.867 "bdev_lvol_delete_lvstore", 00:05:38.867 "bdev_lvol_rename_lvstore", 00:05:38.867 "bdev_lvol_create_lvstore", 00:05:38.867 "bdev_raid_set_options", 00:05:38.867 "bdev_raid_remove_base_bdev", 00:05:38.867 "bdev_raid_add_base_bdev", 00:05:38.867 "bdev_raid_delete", 00:05:38.867 "bdev_raid_create", 00:05:38.867 "bdev_raid_get_bdevs", 00:05:38.867 "bdev_error_inject_error", 00:05:38.867 "bdev_error_delete", 00:05:38.867 "bdev_error_create", 00:05:38.867 "bdev_split_delete", 00:05:38.867 "bdev_split_create", 00:05:38.867 "bdev_delay_delete", 00:05:38.867 "bdev_delay_create", 00:05:38.867 "bdev_delay_update_latency", 00:05:38.867 "bdev_zone_block_delete", 00:05:38.867 "bdev_zone_block_create", 00:05:38.867 "blobfs_create", 00:05:38.867 "blobfs_detect", 00:05:38.867 "blobfs_set_cache_size", 00:05:38.867 "bdev_aio_delete", 00:05:38.867 "bdev_aio_rescan", 00:05:38.867 "bdev_aio_create", 00:05:38.867 "bdev_ftl_set_property", 00:05:38.867 "bdev_ftl_get_properties", 00:05:38.867 "bdev_ftl_get_stats", 00:05:38.867 "bdev_ftl_unmap", 00:05:38.867 "bdev_ftl_unload", 00:05:38.867 "bdev_ftl_delete", 00:05:38.867 "bdev_ftl_load", 00:05:38.867 "bdev_ftl_create", 00:05:38.867 "bdev_virtio_attach_controller", 00:05:38.867 "bdev_virtio_scsi_get_devices", 00:05:38.867 "bdev_virtio_detach_controller", 00:05:38.867 "bdev_virtio_blk_set_hotplug", 00:05:38.867 "bdev_iscsi_delete", 00:05:38.867 "bdev_iscsi_create", 00:05:38.867 "bdev_iscsi_set_options", 00:05:38.867 "accel_error_inject_error", 00:05:38.867 "ioat_scan_accel_module", 00:05:38.867 "dsa_scan_accel_module", 00:05:38.867 "iaa_scan_accel_module", 00:05:38.867 "keyring_file_remove_key", 00:05:38.867 "keyring_file_add_key", 00:05:38.867 
"keyring_linux_set_options", 00:05:38.867 "fsdev_aio_delete", 00:05:38.867 "fsdev_aio_create", 00:05:38.867 "iscsi_get_histogram", 00:05:38.867 "iscsi_enable_histogram", 00:05:38.867 "iscsi_set_options", 00:05:38.867 "iscsi_get_auth_groups", 00:05:38.867 "iscsi_auth_group_remove_secret", 00:05:38.867 "iscsi_auth_group_add_secret", 00:05:38.867 "iscsi_delete_auth_group", 00:05:38.867 "iscsi_create_auth_group", 00:05:38.867 "iscsi_set_discovery_auth", 00:05:38.867 "iscsi_get_options", 00:05:38.867 "iscsi_target_node_request_logout", 00:05:38.867 "iscsi_target_node_set_redirect", 00:05:38.867 "iscsi_target_node_set_auth", 00:05:38.867 "iscsi_target_node_add_lun", 00:05:38.867 "iscsi_get_stats", 00:05:38.867 "iscsi_get_connections", 00:05:38.867 "iscsi_portal_group_set_auth", 00:05:38.867 "iscsi_start_portal_group", 00:05:38.867 "iscsi_delete_portal_group", 00:05:38.867 "iscsi_create_portal_group", 00:05:38.867 "iscsi_get_portal_groups", 00:05:38.867 "iscsi_delete_target_node", 00:05:38.867 "iscsi_target_node_remove_pg_ig_maps", 00:05:38.867 "iscsi_target_node_add_pg_ig_maps", 00:05:38.867 "iscsi_create_target_node", 00:05:38.867 "iscsi_get_target_nodes", 00:05:38.867 "iscsi_delete_initiator_group", 00:05:38.867 "iscsi_initiator_group_remove_initiators", 00:05:38.867 "iscsi_initiator_group_add_initiators", 00:05:38.867 "iscsi_create_initiator_group", 00:05:38.867 "iscsi_get_initiator_groups", 00:05:38.867 "nvmf_set_crdt", 00:05:38.867 "nvmf_set_config", 00:05:38.867 "nvmf_set_max_subsystems", 00:05:38.867 "nvmf_stop_mdns_prr", 00:05:38.867 "nvmf_publish_mdns_prr", 00:05:38.867 "nvmf_subsystem_get_listeners", 00:05:38.867 "nvmf_subsystem_get_qpairs", 00:05:38.867 "nvmf_subsystem_get_controllers", 00:05:38.867 "nvmf_get_stats", 00:05:38.867 "nvmf_get_transports", 00:05:38.867 "nvmf_create_transport", 00:05:38.867 "nvmf_get_targets", 00:05:38.867 "nvmf_delete_target", 00:05:38.867 "nvmf_create_target", 00:05:38.867 "nvmf_subsystem_allow_any_host", 00:05:38.867 
"nvmf_subsystem_set_keys", 00:05:38.867 "nvmf_subsystem_remove_host", 00:05:38.867 "nvmf_subsystem_add_host", 00:05:38.867 "nvmf_ns_remove_host", 00:05:38.867 "nvmf_ns_add_host", 00:05:38.867 "nvmf_subsystem_remove_ns", 00:05:38.867 "nvmf_subsystem_set_ns_ana_group", 00:05:38.867 "nvmf_subsystem_add_ns", 00:05:38.867 "nvmf_subsystem_listener_set_ana_state", 00:05:38.867 "nvmf_discovery_get_referrals", 00:05:38.867 "nvmf_discovery_remove_referral", 00:05:38.867 "nvmf_discovery_add_referral", 00:05:38.867 "nvmf_subsystem_remove_listener", 00:05:38.867 "nvmf_subsystem_add_listener", 00:05:38.867 "nvmf_delete_subsystem", 00:05:38.867 "nvmf_create_subsystem", 00:05:38.867 "nvmf_get_subsystems", 00:05:38.867 "env_dpdk_get_mem_stats", 00:05:38.867 "nbd_get_disks", 00:05:38.867 "nbd_stop_disk", 00:05:38.867 "nbd_start_disk", 00:05:38.867 "ublk_recover_disk", 00:05:38.867 "ublk_get_disks", 00:05:38.867 "ublk_stop_disk", 00:05:38.867 "ublk_start_disk", 00:05:38.867 "ublk_destroy_target", 00:05:38.867 "ublk_create_target", 00:05:38.867 "virtio_blk_create_transport", 00:05:38.867 "virtio_blk_get_transports", 00:05:38.867 "vhost_controller_set_coalescing", 00:05:38.867 "vhost_get_controllers", 00:05:38.867 "vhost_delete_controller", 00:05:38.867 "vhost_create_blk_controller", 00:05:38.867 "vhost_scsi_controller_remove_target", 00:05:38.867 "vhost_scsi_controller_add_target", 00:05:38.867 "vhost_start_scsi_controller", 00:05:38.867 "vhost_create_scsi_controller", 00:05:38.867 "thread_set_cpumask", 00:05:38.867 "scheduler_set_options", 00:05:38.867 "framework_get_governor", 00:05:38.867 "framework_get_scheduler", 00:05:38.867 "framework_set_scheduler", 00:05:38.867 "framework_get_reactors", 00:05:38.867 "thread_get_io_channels", 00:05:38.867 "thread_get_pollers", 00:05:38.867 "thread_get_stats", 00:05:38.867 "framework_monitor_context_switch", 00:05:38.867 "spdk_kill_instance", 00:05:38.867 "log_enable_timestamps", 00:05:38.868 "log_get_flags", 00:05:38.868 "log_clear_flag", 
00:05:38.868 "log_set_flag", 00:05:38.868 "log_get_level", 00:05:38.868 "log_set_level", 00:05:38.868 "log_get_print_level", 00:05:38.868 "log_set_print_level", 00:05:38.868 "framework_enable_cpumask_locks", 00:05:38.868 "framework_disable_cpumask_locks", 00:05:38.868 "framework_wait_init", 00:05:38.868 "framework_start_init", 00:05:38.868 "scsi_get_devices", 00:05:38.868 "bdev_get_histogram", 00:05:38.868 "bdev_enable_histogram", 00:05:38.868 "bdev_set_qos_limit", 00:05:38.868 "bdev_set_qd_sampling_period", 00:05:38.868 "bdev_get_bdevs", 00:05:38.868 "bdev_reset_iostat", 00:05:38.868 "bdev_get_iostat", 00:05:38.868 "bdev_examine", 00:05:38.868 "bdev_wait_for_examine", 00:05:38.868 "bdev_set_options", 00:05:38.868 "accel_get_stats", 00:05:38.868 "accel_set_options", 00:05:38.868 "accel_set_driver", 00:05:38.868 "accel_crypto_key_destroy", 00:05:38.868 "accel_crypto_keys_get", 00:05:38.868 "accel_crypto_key_create", 00:05:38.868 "accel_assign_opc", 00:05:38.868 "accel_get_module_info", 00:05:38.868 "accel_get_opc_assignments", 00:05:38.868 "vmd_rescan", 00:05:38.868 "vmd_remove_device", 00:05:38.868 "vmd_enable", 00:05:38.868 "sock_get_default_impl", 00:05:38.868 "sock_set_default_impl", 00:05:38.868 "sock_impl_set_options", 00:05:38.868 "sock_impl_get_options", 00:05:38.868 "iobuf_get_stats", 00:05:38.868 "iobuf_set_options", 00:05:38.868 "keyring_get_keys", 00:05:38.868 "framework_get_pci_devices", 00:05:38.868 "framework_get_config", 00:05:38.868 "framework_get_subsystems", 00:05:38.868 "fsdev_set_opts", 00:05:38.868 "fsdev_get_opts", 00:05:38.868 "trace_get_info", 00:05:38.868 "trace_get_tpoint_group_mask", 00:05:38.868 "trace_disable_tpoint_group", 00:05:38.868 "trace_enable_tpoint_group", 00:05:38.868 "trace_clear_tpoint_mask", 00:05:38.868 "trace_set_tpoint_mask", 00:05:38.868 "notify_get_notifications", 00:05:38.868 "notify_get_types", 00:05:38.868 "spdk_get_version", 00:05:38.868 "rpc_get_methods" 00:05:38.868 ] 00:05:38.868 03:08:21 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:38.868 03:08:21 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:38.868 03:08:21 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:38.868 03:08:21 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:38.868 03:08:21 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57913 00:05:38.868 03:08:21 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 57913 ']' 00:05:38.868 03:08:21 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 57913 00:05:38.868 03:08:21 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:05:38.868 03:08:21 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:38.868 03:08:21 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 57913 00:05:38.868 03:08:22 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:38.868 03:08:22 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:38.868 killing process with pid 57913 00:05:38.868 03:08:22 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 57913' 00:05:38.868 03:08:22 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 57913 00:05:38.868 03:08:22 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 57913 00:05:41.396 00:05:41.396 real 0m4.424s 00:05:41.396 user 0m7.742s 00:05:41.396 sys 0m0.622s 00:05:41.396 03:08:24 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:41.396 ************************************ 00:05:41.396 END TEST spdkcli_tcp 00:05:41.396 ************************************ 00:05:41.396 03:08:24 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:41.396 03:08:24 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:41.396 03:08:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:41.396 03:08:24 -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:05:41.396 03:08:24 -- common/autotest_common.sh@10 -- # set +x 00:05:41.396 ************************************ 00:05:41.396 START TEST dpdk_mem_utility 00:05:41.396 ************************************ 00:05:41.396 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:41.655 * Looking for test storage... 00:05:41.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:05:41.655 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:41.655 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lcov --version 00:05:41.655 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:41.655 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:41.655 
03:08:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.655 03:08:24 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:41.655 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.655 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:41.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.655 --rc genhtml_branch_coverage=1 00:05:41.655 --rc genhtml_function_coverage=1 00:05:41.655 --rc genhtml_legend=1 00:05:41.655 --rc geninfo_all_blocks=1 00:05:41.655 --rc geninfo_unexecuted_blocks=1 00:05:41.655 00:05:41.655 ' 00:05:41.655 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:41.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.655 --rc 
genhtml_branch_coverage=1 00:05:41.655 --rc genhtml_function_coverage=1 00:05:41.655 --rc genhtml_legend=1 00:05:41.655 --rc geninfo_all_blocks=1 00:05:41.655 --rc geninfo_unexecuted_blocks=1 00:05:41.655 00:05:41.655 ' 00:05:41.655 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:41.655 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.655 --rc genhtml_branch_coverage=1 00:05:41.655 --rc genhtml_function_coverage=1 00:05:41.655 --rc genhtml_legend=1 00:05:41.655 --rc geninfo_all_blocks=1 00:05:41.655 --rc geninfo_unexecuted_blocks=1 00:05:41.656 00:05:41.656 ' 00:05:41.656 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:41.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.656 --rc genhtml_branch_coverage=1 00:05:41.656 --rc genhtml_function_coverage=1 00:05:41.656 --rc genhtml_legend=1 00:05:41.656 --rc geninfo_all_blocks=1 00:05:41.656 --rc geninfo_unexecuted_blocks=1 00:05:41.656 00:05:41.656 ' 00:05:41.656 03:08:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:41.656 03:08:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:05:41.656 03:08:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=58040 00:05:41.656 03:08:24 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 58040 00:05:41.656 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 58040 ']' 00:05:41.656 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.656 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:41.656 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.656 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.656 03:08:24 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:41.656 [2024-10-09 03:08:24.934016] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:41.656 [2024-10-09 03:08:24.934132] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58040 ] 00:05:41.915 [2024-10-09 03:08:25.099170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.178 [2024-10-09 03:08:25.316968] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.126 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:43.126 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:05:43.126 03:08:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:43.126 03:08:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:43.126 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:43.126 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:43.126 { 00:05:43.126 "filename": "/tmp/spdk_mem_dump.txt" 00:05:43.126 } 00:05:43.126 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:43.126 03:08:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:05:43.126 DPDK memory size 866.000000 MiB in 1 heap(s) 00:05:43.126 1 heaps 
totaling size 866.000000 MiB 00:05:43.126 size: 866.000000 MiB heap id: 0 00:05:43.126 end heaps---------- 00:05:43.126 9 mempools totaling size 642.649841 MiB 00:05:43.126 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:43.126 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:43.126 size: 92.545471 MiB name: bdev_io_58040 00:05:43.126 size: 51.011292 MiB name: evtpool_58040 00:05:43.126 size: 50.003479 MiB name: msgpool_58040 00:05:43.126 size: 36.509338 MiB name: fsdev_io_58040 00:05:43.126 size: 21.763794 MiB name: PDU_Pool 00:05:43.126 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:43.126 size: 0.026123 MiB name: Session_Pool 00:05:43.126 end mempools------- 00:05:43.126 6 memzones totaling size 4.142822 MiB 00:05:43.126 size: 1.000366 MiB name: RG_ring_0_58040 00:05:43.126 size: 1.000366 MiB name: RG_ring_1_58040 00:05:43.126 size: 1.000366 MiB name: RG_ring_4_58040 00:05:43.126 size: 1.000366 MiB name: RG_ring_5_58040 00:05:43.126 size: 0.125366 MiB name: RG_ring_2_58040 00:05:43.126 size: 0.015991 MiB name: RG_ring_3_58040 00:05:43.126 end memzones------- 00:05:43.126 03:08:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:05:43.126 heap id: 0 total size: 866.000000 MiB number of busy elements: 313 number of free elements: 19 00:05:43.126 list of free elements. 
size: 19.914062 MiB 00:05:43.126 element at address: 0x200000400000 with size: 1.999451 MiB 00:05:43.126 element at address: 0x200000800000 with size: 1.996887 MiB 00:05:43.126 element at address: 0x200009600000 with size: 1.995972 MiB 00:05:43.126 element at address: 0x20000d800000 with size: 1.995972 MiB 00:05:43.126 element at address: 0x200007000000 with size: 1.991028 MiB 00:05:43.126 element at address: 0x20001bf00040 with size: 0.999939 MiB 00:05:43.126 element at address: 0x20001c300040 with size: 0.999939 MiB 00:05:43.126 element at address: 0x20001c400000 with size: 0.999084 MiB 00:05:43.126 element at address: 0x200035000000 with size: 0.994324 MiB 00:05:43.126 element at address: 0x20001bc00000 with size: 0.959656 MiB 00:05:43.126 element at address: 0x20001c700040 with size: 0.936401 MiB 00:05:43.126 element at address: 0x200000200000 with size: 0.831909 MiB 00:05:43.126 element at address: 0x20001de00000 with size: 0.562195 MiB 00:05:43.126 element at address: 0x200003e00000 with size: 0.490173 MiB 00:05:43.126 element at address: 0x20001c000000 with size: 0.488708 MiB 00:05:43.126 element at address: 0x20001c800000 with size: 0.485413 MiB 00:05:43.126 element at address: 0x200015e00000 with size: 0.443481 MiB 00:05:43.126 element at address: 0x20002b200000 with size: 0.390442 MiB 00:05:43.126 element at address: 0x200003a00000 with size: 0.353088 MiB 00:05:43.126 list of standard malloc elements. 
size: 199.287231 MiB
00:05:43.126 element at address: 0x20000d9fef80 with size: 132.000183 MiB
00:05:43.126 element at address: 0x2000097fef80 with size: 64.000183 MiB
00:05:43.126 element at address: 0x20001bdfff80 with size: 1.000183 MiB
00:05:43.126 element at address: 0x20001c1fff80 with size: 1.000183 MiB
00:05:43.126 element at address: 0x20001c5fff80 with size: 1.000183 MiB
00:05:43.126 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:05:43.126 element at address: 0x20001c7eff40 with size: 0.062683 MiB
00:05:43.126 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:05:43.126 element at address: 0x20000d7ff040 with size: 0.000427 MiB
00:05:43.126 element at address: 0x20001c7efdc0 with size: 0.000366 MiB
00:05:43.126 element at address: 0x200015dff040 with size: 0.000305 MiB
00:05:43.126 elements at addresses 0x2000002d4f80-0x2000002d5f80 (0x100 apart), each with size: 0.000244 MiB
00:05:43.126 elements at addresses 0x2000002d6200-0x2000002d7b00 (0x100 apart), each with size: 0.000244 MiB
00:05:43.126 element at address: 0x2000003d9d80 with size: 0.000244 MiB
00:05:43.126 elements at addresses 0x200003a7eac0-0x200003a7f4c0 (0x100 apart), each with size: 0.000244 MiB
00:05:43.126 element at address: 0x200003aff800 with size: 0.000244 MiB
00:05:43.126 element at address: 0x200003affa80 with size: 0.000244 MiB
00:05:43.126 elements at addresses 0x200003e7d7c0-0x200003e7ebc0 (0x100 apart), each with size: 0.000244 MiB
00:05:43.127 element at address: 0x200003efef00 with size: 0.000244 MiB
00:05:43.127 element at address: 0x200003eff000 with size: 0.000244 MiB
00:05:43.127 elements at addresses 0x20000d7ff200-0x20000d7fff00 (0x100 apart), each with size: 0.000244 MiB
00:05:43.127 elements at addresses 0x200015dff180-0x200015dffc80 (0x100 apart), each with size: 0.000244 MiB
00:05:43.127 element at address: 0x200015dfff00 with size: 0.000244 MiB
00:05:43.127 elements at addresses 0x200015e71880-0x200015e72180 (0x100 apart), each with size: 0.000244 MiB
00:05:43.127 element at address: 0x200015ef24c0 with size: 0.000244 MiB
00:05:43.127 element at address: 0x20001bcfdd00 with size: 0.000244 MiB
00:05:43.127 elements at addresses 0x20001c07d1c0-0x20001c07d9c0 (0x100 apart), each with size: 0.000244 MiB
00:05:43.127 element at address: 0x20001c0fdd00 with size: 0.000244 MiB
00:05:43.127 element at address: 0x20001c4ffc40 with size: 0.000244 MiB
00:05:43.127 element at address: 0x20001c7efbc0 with size: 0.000244 MiB
00:05:43.127 element at address: 0x20001c7efcc0 with size: 0.000244 MiB
00:05:43.127 element at address: 0x20001c8bc680 with size: 0.000244 MiB
00:05:43.127 elements at addresses 0x20001de8fec0-0x20001de953c0 (0x100 apart), each with size: 0.000244 MiB
00:05:43.127 element at address: 0x20002b263f40 with size: 0.000244 MiB
00:05:43.127 element at address: 0x20002b264040 with size: 0.000244 MiB
00:05:43.128 element at address: 0x20002b26ad00 with size: 0.000244 MiB
00:05:43.128 elements at addresses 0x20002b26af80-0x20002b26fe80 (0x100 apart), each with size: 0.000244 MiB
00:05:43.128 list of memzone associated elements. size: 646.798706 MiB
00:05:43.128 element at address: 0x20001de954c0 with size: 211.416809 MiB
00:05:43.128 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0
00:05:43.128 element at address: 0x20002b26ff80 with size: 157.562622 MiB
00:05:43.128 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0
00:05:43.128 element at address: 0x200015ff4740 with size: 92.045105 MiB
00:05:43.128 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_58040_0
00:05:43.128 element at address: 0x2000009ff340 with size: 48.003113 MiB
00:05:43.128 associated memzone info: size: 48.002930 MiB name: MP_evtpool_58040_0
00:05:43.128 element at address: 0x200003fff340 with size: 48.003113 MiB
00:05:43.128 associated memzone info: size: 48.002930 MiB name: MP_msgpool_58040_0
00:05:43.128 element at address: 0x2000071fdb40 with size: 36.008972 MiB
00:05:43.128 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_58040_0
00:05:43.128 element at address: 0x20001c9be900 with size: 20.255615 MiB
00:05:43.128 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0
00:05:43.128 element at address: 0x2000351feb00 with size: 18.005127 MiB
00:05:43.128 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0
00:05:43.128 element at address: 0x2000005ffdc0
with size: 2.000549 MiB
00:05:43.128 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_58040
00:05:43.128 element at address: 0x200003bffdc0 with size: 2.000549 MiB
00:05:43.128 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_58040
00:05:43.128 element at address: 0x2000002d7c00 with size: 1.008179 MiB
00:05:43.128 associated memzone info: size: 1.007996 MiB name: MP_evtpool_58040
00:05:43.128 element at address: 0x20001c0fde00 with size: 1.008179 MiB
00:05:43.128 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool
00:05:43.128 element at address: 0x20001c8bc780 with size: 1.008179 MiB
00:05:43.128 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool
00:05:43.128 element at address: 0x20001bcfde00 with size: 1.008179 MiB
00:05:43.128 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool
00:05:43.128 element at address: 0x200015ef25c0 with size: 1.008179 MiB
00:05:43.128 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool
00:05:43.128 element at address: 0x200003eff100 with size: 1.000549 MiB
00:05:43.128 associated memzone info: size: 1.000366 MiB name: RG_ring_0_58040
00:05:43.128 element at address: 0x200003affb80 with size: 1.000549 MiB
00:05:43.128 associated memzone info: size: 1.000366 MiB name: RG_ring_1_58040
00:05:43.128 element at address: 0x20001c4ffd40 with size: 1.000549 MiB
00:05:43.128 associated memzone info: size: 1.000366 MiB name: RG_ring_4_58040
00:05:43.128 element at address: 0x2000350fe8c0 with size: 1.000549 MiB
00:05:43.128 associated memzone info: size: 1.000366 MiB name: RG_ring_5_58040
00:05:43.128 element at address: 0x200003a7f5c0 with size: 0.500549 MiB
00:05:43.128 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_58040
00:05:43.128 element at address: 0x200003e7ecc0 with size: 0.500549 MiB
00:05:43.128 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_58040
00:05:43.128 element at address: 0x20001c07dac0 with size: 0.500549 MiB
00:05:43.128 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool
00:05:43.128 element at address: 0x200015e72280 with size: 0.500549 MiB
00:05:43.128 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool
00:05:43.128 element at address: 0x20001c87c440 with size: 0.250549 MiB
00:05:43.128 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool
00:05:43.128 element at address: 0x200003a5e880 with size: 0.125549 MiB
00:05:43.128 associated memzone info: size: 0.125366 MiB name: RG_ring_2_58040
00:05:43.128 element at address: 0x20001bcf5ac0 with size: 0.031799 MiB
00:05:43.128 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool
00:05:43.128 element at address: 0x20002b264140 with size: 0.023804 MiB
00:05:43.128 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0
00:05:43.128 element at address: 0x200003a5a640 with size: 0.016174 MiB
00:05:43.128 associated memzone info: size: 0.015991 MiB name: RG_ring_3_58040
00:05:43.128 element at address: 0x20002b26a2c0 with size: 0.002502 MiB
00:05:43.128 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool
00:05:43.128 element at address: 0x2000002d6080 with size: 0.000366 MiB
00:05:43.128 associated memzone info: size: 0.000183 MiB name: MP_msgpool_58040
00:05:43.128 element at address: 0x200003aff900 with size: 0.000366 MiB
00:05:43.128 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_58040
00:05:43.128 element at address: 0x200015dffd80 with size: 0.000366 MiB
00:05:43.128 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_58040
00:05:43.128 element at address: 0x20002b26ae00 with size: 0.000366 MiB
00:05:43.128 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool
00:05:43.128 03:08:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT
00:05:43.128 03:08:26 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26
-- # killprocess 58040
00:05:43.129 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 58040 ']'
00:05:43.129 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 58040
00:05:43.129 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname
00:05:43.129 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:43.129 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58040
00:05:43.129 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:43.129 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 58040
00:05:43.129 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58040'
00:05:43.129 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 58040
00:05:43.129 03:08:26 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 58040
00:05:45.661
00:05:45.661 real 0m4.199s
00:05:45.661 user 0m4.086s
00:05:45.661 sys 0m0.550s
00:05:45.661 03:08:28 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:45.661 03:08:28 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x
00:05:45.661 ************************************
00:05:45.661 END TEST dpdk_mem_utility
00:05:45.661 ************************************
00:05:45.661 03:08:28 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:45.661 03:08:28 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:45.661 03:08:28 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:45.661 03:08:28 -- common/autotest_common.sh@10 -- # set +x
00:05:45.661 ************************************
00:05:45.661 START TEST event
00:05:45.661 ************************************
00:05:45.661 03:08:28 event -- common/autotest_common.sh@1125 -- #
/home/vagrant/spdk_repo/spdk/test/event/event.sh
00:05:45.919 * Looking for test storage...
00:05:45.919 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event
00:05:45.919 03:08:29 event -- common/autotest_common.sh@1680 -- # [[ y == y ]]
00:05:45.919 03:08:29 event -- common/autotest_common.sh@1681 -- # awk '{print $NF}'
00:05:45.919 03:08:29 event -- common/autotest_common.sh@1681 -- # lcov --version
00:05:45.919 03:08:29 event -- common/autotest_common.sh@1681 -- # lt 1.15 2
00:05:45.919 03:08:29 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:45.919 03:08:29 event -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:45.919 03:08:29 event -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:45.919 03:08:29 event -- scripts/common.sh@336 -- # IFS=.-:
00:05:45.919 03:08:29 event -- scripts/common.sh@336 -- # read -ra ver1
00:05:45.920 03:08:29 event -- scripts/common.sh@337 -- # IFS=.-:
00:05:45.920 03:08:29 event -- scripts/common.sh@337 -- # read -ra ver2
00:05:45.920 03:08:29 event -- scripts/common.sh@338 -- # local 'op=<'
00:05:45.920 03:08:29 event -- scripts/common.sh@340 -- # ver1_l=2
00:05:45.920 03:08:29 event -- scripts/common.sh@341 -- # ver2_l=1
00:05:45.920 03:08:29 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:45.920 03:08:29 event -- scripts/common.sh@344 -- # case "$op" in
00:05:45.920 03:08:29 event -- scripts/common.sh@345 -- # : 1
00:05:45.920 03:08:29 event -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:45.920 03:08:29 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:45.920 03:08:29 event -- scripts/common.sh@365 -- # decimal 1
00:05:45.920 03:08:29 event -- scripts/common.sh@353 -- # local d=1
00:05:45.920 03:08:29 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:45.920 03:08:29 event -- scripts/common.sh@355 -- # echo 1
00:05:45.920 03:08:29 event -- scripts/common.sh@365 -- # ver1[v]=1
00:05:45.920 03:08:29 event -- scripts/common.sh@366 -- # decimal 2
00:05:45.920 03:08:29 event -- scripts/common.sh@353 -- # local d=2
00:05:45.920 03:08:29 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:45.920 03:08:29 event -- scripts/common.sh@355 -- # echo 2
00:05:45.920 03:08:29 event -- scripts/common.sh@366 -- # ver2[v]=2
00:05:45.920 03:08:29 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:45.920 03:08:29 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:45.920 03:08:29 event -- scripts/common.sh@368 -- # return 0
00:05:45.920 03:08:29 event -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:45.920 03:08:29 event -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS=
00:05:45.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.920 --rc genhtml_branch_coverage=1
00:05:45.920 --rc genhtml_function_coverage=1
00:05:45.920 --rc genhtml_legend=1
00:05:45.920 --rc geninfo_all_blocks=1
00:05:45.920 --rc geninfo_unexecuted_blocks=1
00:05:45.920
00:05:45.920 '
00:05:45.920 03:08:29 event -- common/autotest_common.sh@1694 -- # LCOV_OPTS='
00:05:45.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.920 --rc genhtml_branch_coverage=1
00:05:45.920 --rc genhtml_function_coverage=1
00:05:45.920 --rc genhtml_legend=1
00:05:45.920 --rc geninfo_all_blocks=1
00:05:45.920 --rc geninfo_unexecuted_blocks=1
00:05:45.920
00:05:45.920 '
00:05:45.920 03:08:29 event -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov
00:05:45.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.920 --rc genhtml_branch_coverage=1
00:05:45.920 --rc genhtml_function_coverage=1
00:05:45.920 --rc genhtml_legend=1
00:05:45.920 --rc geninfo_all_blocks=1
00:05:45.920 --rc geninfo_unexecuted_blocks=1
00:05:45.920
00:05:45.920 '
00:05:45.920 03:08:29 event -- common/autotest_common.sh@1695 -- # LCOV='lcov
00:05:45.920 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:45.920 --rc genhtml_branch_coverage=1
00:05:45.920 --rc genhtml_function_coverage=1
00:05:45.920 --rc genhtml_legend=1
00:05:45.920 --rc geninfo_all_blocks=1
00:05:45.920 --rc geninfo_unexecuted_blocks=1
00:05:45.920
00:05:45.920 '
00:05:45.920 03:08:29 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh
00:05:45.920 03:08:29 event -- bdev/nbd_common.sh@6 -- # set -e
00:05:45.920 03:08:29 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:45.920 03:08:29 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']'
00:05:45.920 03:08:29 event -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:45.920 03:08:29 event -- common/autotest_common.sh@10 -- # set +x
00:05:45.920 ************************************
00:05:45.920 START TEST event_perf
00:05:45.920 ************************************
00:05:45.920 03:08:29 event.event_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:05:45.920 Running I/O for 1 seconds...[2024-10-09 03:08:29.163913] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization...
00:05:45.920 [2024-10-09 03:08:29.164429] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58148 ]
00:05:46.178 [2024-10-09 03:08:29.326897] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4
00:05:46.436 [2024-10-09 03:08:29.542683] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:05:46.436 [2024-10-09 03:08:29.542891] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:05:46.436 [2024-10-09 03:08:29.543033] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3
00:05:46.436 [2024-10-09 03:08:29.543126] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.813 Running I/O for 1 seconds...
00:05:47.813 lcore 0: 198147
00:05:47.813 lcore 1: 198147
00:05:47.813 lcore 2: 198148
00:05:47.813 lcore 3: 198147
00:05:47.813 done.
00:05:47.813 00:05:47.813 real 0m1.832s 00:05:47.813 user 0m4.576s 00:05:47.813 sys 0m0.134s 00:05:47.813 03:08:30 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:47.813 03:08:30 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:47.813 ************************************ 00:05:47.813 END TEST event_perf 00:05:47.813 ************************************ 00:05:47.813 03:08:30 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:47.813 03:08:30 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:47.813 03:08:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:47.813 03:08:30 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.813 ************************************ 00:05:47.813 START TEST event_reactor 00:05:47.813 ************************************ 00:05:47.813 03:08:31 event.event_reactor -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:05:47.813 [2024-10-09 03:08:31.064757] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:47.813 [2024-10-09 03:08:31.064917] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58188 ] 00:05:48.072 [2024-10-09 03:08:31.244195] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.330 [2024-10-09 03:08:31.458341] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.705 test_start 00:05:49.705 oneshot 00:05:49.705 tick 100 00:05:49.705 tick 100 00:05:49.705 tick 250 00:05:49.705 tick 100 00:05:49.705 tick 100 00:05:49.705 tick 100 00:05:49.705 tick 250 00:05:49.705 tick 500 00:05:49.705 tick 100 00:05:49.705 tick 100 00:05:49.705 tick 250 00:05:49.705 tick 100 00:05:49.705 tick 100 00:05:49.705 test_end 00:05:49.705 00:05:49.705 real 0m1.837s 00:05:49.705 user 0m1.601s 00:05:49.705 sys 0m0.127s 00:05:49.705 03:08:32 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:49.705 03:08:32 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:49.705 ************************************ 00:05:49.705 END TEST event_reactor 00:05:49.705 ************************************ 00:05:49.705 03:08:32 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.705 03:08:32 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:05:49.705 03:08:32 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:49.705 03:08:32 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.705 ************************************ 00:05:49.705 START TEST event_reactor_perf 00:05:49.705 ************************************ 00:05:49.705 03:08:32 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:49.705 [2024-10-09 
03:08:32.952299] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:49.705 [2024-10-09 03:08:32.952408] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58230 ] 00:05:49.964 [2024-10-09 03:08:33.116150] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.222 [2024-10-09 03:08:33.335359] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.719 test_start 00:05:51.719 test_end 00:05:51.719 Performance: 388036 events per second 00:05:51.719 00:05:51.719 real 0m1.811s 00:05:51.719 user 0m1.596s 00:05:51.719 sys 0m0.106s 00:05:51.719 03:08:34 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:51.719 03:08:34 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:51.719 ************************************ 00:05:51.719 END TEST event_reactor_perf 00:05:51.719 ************************************ 00:05:51.719 03:08:34 event -- event/event.sh@49 -- # uname -s 00:05:51.719 03:08:34 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:51.719 03:08:34 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:51.719 03:08:34 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:51.719 03:08:34 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:51.719 03:08:34 event -- common/autotest_common.sh@10 -- # set +x 00:05:51.719 ************************************ 00:05:51.719 START TEST event_scheduler 00:05:51.719 ************************************ 00:05:51.719 03:08:34 event.event_scheduler -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:05:51.719 * Looking for test storage... 
00:05:51.719 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:05:51.719 03:08:34 event.event_scheduler -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:05:51.719 03:08:34 event.event_scheduler -- common/autotest_common.sh@1681 -- # lcov --version 00:05:51.719 03:08:34 event.event_scheduler -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:05:51.719 03:08:34 event.event_scheduler -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:51.719 03:08:34 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:51.719 03:08:35 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:51.719 03:08:35 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:51.719 03:08:35 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:51.719 03:08:35 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:51.719 03:08:35 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:51.719 03:08:35 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:51.719 03:08:35 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:51.719 03:08:35 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:51.719 03:08:35 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:51.719 03:08:35 event.event_scheduler -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:51.719 03:08:35 event.event_scheduler -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:05:51.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.719 --rc genhtml_branch_coverage=1 00:05:51.719 --rc genhtml_function_coverage=1 00:05:51.719 --rc genhtml_legend=1 00:05:51.719 --rc geninfo_all_blocks=1 00:05:51.719 --rc geninfo_unexecuted_blocks=1 00:05:51.719 00:05:51.719 ' 00:05:51.719 03:08:35 event.event_scheduler -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:05:51.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.719 --rc genhtml_branch_coverage=1 00:05:51.719 --rc genhtml_function_coverage=1 00:05:51.719 --rc 
genhtml_legend=1 00:05:51.719 --rc geninfo_all_blocks=1 00:05:51.719 --rc geninfo_unexecuted_blocks=1 00:05:51.719 00:05:51.719 ' 00:05:51.719 03:08:35 event.event_scheduler -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:05:51.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.719 --rc genhtml_branch_coverage=1 00:05:51.719 --rc genhtml_function_coverage=1 00:05:51.719 --rc genhtml_legend=1 00:05:51.719 --rc geninfo_all_blocks=1 00:05:51.719 --rc geninfo_unexecuted_blocks=1 00:05:51.719 00:05:51.719 ' 00:05:51.719 03:08:35 event.event_scheduler -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:05:51.719 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:51.719 --rc genhtml_branch_coverage=1 00:05:51.719 --rc genhtml_function_coverage=1 00:05:51.719 --rc genhtml_legend=1 00:05:51.719 --rc geninfo_all_blocks=1 00:05:51.719 --rc geninfo_unexecuted_blocks=1 00:05:51.719 00:05:51.719 ' 00:05:51.719 03:08:35 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:51.719 03:08:35 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58306 00:05:51.719 03:08:35 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:51.719 03:08:35 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:51.719 03:08:35 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58306 00:05:51.719 03:08:35 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 58306 ']' 00:05:51.719 03:08:35 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:51.719 03:08:35 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:51.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:51.719 03:08:35 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:51.719 03:08:35 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:51.719 03:08:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:51.978 [2024-10-09 03:08:35.089408] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:05:51.978 [2024-10-09 03:08:35.089538] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58306 ] 00:05:51.978 [2024-10-09 03:08:35.253324] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:52.236 [2024-10-09 03:08:35.473360] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.236 [2024-10-09 03:08:35.473536] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.236 [2024-10-09 03:08:35.473673] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:05:52.236 [2024-10-09 03:08:35.473722] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:05:52.802 03:08:35 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:52.802 03:08:35 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:05:52.802 03:08:35 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:52.802 03:08:35 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.802 03:08:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:52.802 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.802 POWER: Cannot set governor of lcore 0 to userspace 00:05:52.802 POWER: 
failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.802 POWER: Cannot set governor of lcore 0 to performance 00:05:52.802 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.802 POWER: Cannot set governor of lcore 0 to userspace 00:05:52.802 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:05:52.802 POWER: Cannot set governor of lcore 0 to userspace 00:05:52.802 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:05:52.802 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:05:52.802 POWER: Unable to set Power Management Environment for lcore 0 00:05:52.802 [2024-10-09 03:08:35.938256] dpdk_governor.c: 130:_init_core: *ERROR*: Failed to initialize on core0 00:05:52.802 [2024-10-09 03:08:35.938278] dpdk_governor.c: 191:_init: *ERROR*: Failed to initialize on core0 00:05:52.802 [2024-10-09 03:08:35.938290] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:52.802 [2024-10-09 03:08:35.938311] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:52.802 [2024-10-09 03:08:35.938319] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:52.802 [2024-10-09 03:08:35.938329] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:52.802 03:08:35 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:52.802 03:08:35 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:52.802 03:08:35 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:52.802 03:08:35 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.065 [2024-10-09 03:08:36.246784] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:05:53.065 03:08:36 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.065 03:08:36 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:53.065 03:08:36 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:53.065 03:08:36 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:53.065 03:08:36 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.065 ************************************ 00:05:53.065 START TEST scheduler_create_thread 00:05:53.065 ************************************ 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.065 2 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.065 3 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.065 4 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.065 5 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.065 6 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:05:53.065 7 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.065 8 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.065 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:53.325 9 00:05:53.325 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:53.325 03:08:36 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:53.325 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:53.325 03:08:36 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:54.701 10 00:05:54.701 03:08:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:54.701 03:08:37 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:05:54.701 03:08:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:54.701 03:08:37 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:55.269 03:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:55.269 03:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:55.269 03:08:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:55.269 03:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:55.269 03:08:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.233 03:08:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.233 03:08:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:56.233 03:08:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.233 03:08:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:56.801 03:08:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:56.801 03:08:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:56.801 03:08:39 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:56.801 03:08:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:56.801 03:08:39 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.370 ************************************ 00:05:57.370 END TEST scheduler_create_thread 00:05:57.370 ************************************ 00:05:57.370 03:08:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:57.370 00:05:57.370 real 0m4.205s 00:05:57.370 user 0m0.026s 00:05:57.370 sys 0m0.009s 00:05:57.370 03:08:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:57.370 03:08:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:57.370 03:08:40 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:57.370 03:08:40 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58306 00:05:57.370 03:08:40 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 58306 ']' 00:05:57.370 03:08:40 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 58306 00:05:57.370 03:08:40 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:05:57.370 03:08:40 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:57.370 03:08:40 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58306 00:05:57.370 killing process with pid 58306 00:05:57.370 03:08:40 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:57.370 03:08:40 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:57.370 03:08:40 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58306' 00:05:57.370 03:08:40 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 58306 00:05:57.370 03:08:40 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 58306 00:05:57.629 [2024-10-09 03:08:40.746460] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:05:59.008 ************************************ 00:05:59.008 END TEST event_scheduler 00:05:59.008 ************************************ 00:05:59.008 00:05:59.008 real 0m7.321s 00:05:59.008 user 0m16.494s 00:05:59.009 sys 0m0.496s 00:05:59.009 03:08:42 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:59.009 03:08:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.009 03:08:42 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:59.009 03:08:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:59.009 03:08:42 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:59.009 03:08:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:59.009 03:08:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.009 ************************************ 00:05:59.009 START TEST app_repeat 00:05:59.009 ************************************ 00:05:59.009 03:08:42 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:05:59.009 03:08:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.009 03:08:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.009 03:08:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:59.009 03:08:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.009 03:08:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:59.009 03:08:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:59.009 03:08:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:59.009 03:08:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58434 00:05:59.009 03:08:42 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:59.009 
03:08:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:59.009 03:08:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58434' 00:05:59.009 Process app_repeat pid: 58434 00:05:59.009 spdk_app_start Round 0 00:05:59.009 03:08:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.009 03:08:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:59.009 03:08:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58434 /var/tmp/spdk-nbd.sock 00:05:59.009 03:08:42 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58434 ']' 00:05:59.009 03:08:42 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.009 03:08:42 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:59.009 03:08:42 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:59.009 03:08:42 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:59.009 03:08:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.009 [2024-10-09 03:08:42.238587] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:05:59.009 [2024-10-09 03:08:42.238743] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58434 ] 00:05:59.268 [2024-10-09 03:08:42.389098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:59.527 [2024-10-09 03:08:42.609845] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.527 [2024-10-09 03:08:42.609937] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:00.094 03:08:43 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:00.094 03:08:43 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:00.094 03:08:43 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.353 Malloc0 00:06:00.353 03:08:43 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:00.613 Malloc1 00:06:00.613 03:08:43 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.613 03:08:43 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.613 03:08:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.613 03:08:43 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:00.613 03:08:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.613 03:08:43 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:00.613 03:08:43 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:00.613 03:08:43 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.613 03:08:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:00.613 03:08:43 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:00.613 03:08:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.613 03:08:43 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:00.613 03:08:43 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:00.613 03:08:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:00.613 03:08:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.613 03:08:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:00.872 /dev/nbd0 00:06:00.872 03:08:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:00.872 03:08:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:00.872 03:08:43 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:00.872 03:08:43 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:00.872 03:08:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:00.872 03:08:43 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:00.872 03:08:43 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:00.872 03:08:43 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:00.872 03:08:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:00.872 03:08:43 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:00.872 03:08:43 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:00.872 1+0 records in 00:06:00.872 1+0 
records out 00:06:00.872 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035286 s, 11.6 MB/s 00:06:00.872 03:08:43 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.872 03:08:43 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:00.872 03:08:43 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:00.872 03:08:43 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:00.872 03:08:43 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:00.873 03:08:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.873 03:08:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:00.873 03:08:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:01.132 /dev/nbd1 00:06:01.132 03:08:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:01.132 03:08:44 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:01.132 03:08:44 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:01.132 03:08:44 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:01.132 03:08:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:01.132 03:08:44 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:01.132 03:08:44 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:01.132 03:08:44 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:01.132 03:08:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:01.132 03:08:44 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:01.132 03:08:44 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:01.132 1+0 records in 00:06:01.132 1+0 records out 00:06:01.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000259975 s, 15.8 MB/s 00:06:01.132 03:08:44 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.132 03:08:44 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:01.132 03:08:44 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:01.132 03:08:44 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:01.132 03:08:44 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:01.132 03:08:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:01.132 03:08:44 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:01.132 03:08:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.132 03:08:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.132 03:08:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:01.391 { 00:06:01.391 "nbd_device": "/dev/nbd0", 00:06:01.391 "bdev_name": "Malloc0" 00:06:01.391 }, 00:06:01.391 { 00:06:01.391 "nbd_device": "/dev/nbd1", 00:06:01.391 "bdev_name": "Malloc1" 00:06:01.391 } 00:06:01.391 ]' 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:01.391 { 00:06:01.391 "nbd_device": "/dev/nbd0", 00:06:01.391 "bdev_name": "Malloc0" 00:06:01.391 }, 00:06:01.391 { 00:06:01.391 "nbd_device": "/dev/nbd1", 00:06:01.391 "bdev_name": "Malloc1" 00:06:01.391 } 00:06:01.391 ]' 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
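The trace above fetches the started devices via the `nbd_get_disks` RPC, extracts the `nbd_device` fields with `jq -r`, and counts them with `grep -c /dev/nbd`. A minimal stand-alone sketch of that counting step, using a hard-coded copy of the JSON shape seen in the log (not a live RPC call):

```shell
# Sketch of the device-count check from nbd_get_count: parse the
# nbd_get_disks JSON with jq, then count /dev/nbd entries with grep -c.
# The JSON literal below mirrors the RPC output captured in the trace.
nbd_disks_json='[
  { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
  { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" }
]'
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd)
```

After all disks are stopped the RPC returns `[]`, the name list is empty, and the same pipeline yields a count of 0 — which is why the later `'[' 0 -ne 0 ']'` check passes.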
00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:01.391 /dev/nbd1' 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:01.391 /dev/nbd1' 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:01.391 256+0 records in 00:06:01.391 256+0 records out 00:06:01.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508133 s, 206 MB/s 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:01.391 256+0 records in 00:06:01.391 256+0 records out 00:06:01.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225201 s, 46.6 MB/s 00:06:01.391 03:08:44 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.391 256+0 records in 00:06:01.391 256+0 records out 00:06:01.391 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249698 s, 42.0 MB/s 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.391 03:08:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:01.649 03:08:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:01.649 03:08:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:01.649 03:08:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:01.649 03:08:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.649 03:08:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.649 03:08:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:01.649 03:08:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:01.649 03:08:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.649 03:08:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.649 03:08:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:01.908 03:08:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:01.908 03:08:45 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:01.908 03:08:45 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:01.908 03:08:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:01.908 03:08:45 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:01.908 03:08:45 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:01.908 03:08:45 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:01.908 03:08:45 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:01.908 03:08:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:01.908 03:08:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.908 03:08:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:02.166 03:08:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:02.166 03:08:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:02.166 03:08:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:02.166 03:08:45 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:02.166 03:08:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:02.166 03:08:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:02.166 03:08:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:02.166 03:08:45 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:02.166 03:08:45 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:02.166 03:08:45 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:02.166 03:08:45 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:02.166 03:08:45 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:02.166 03:08:45 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:02.424 03:08:45 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:03.797 [2024-10-09 03:08:47.030910] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.055 [2024-10-09 03:08:47.237604] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.055 [2024-10-09 03:08:47.237609] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.311 
[2024-10-09 03:08:47.432524] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:04.311 [2024-10-09 03:08:47.432603] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:05.685 spdk_app_start Round 1 00:06:05.685 03:08:48 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:05.685 03:08:48 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:05.685 03:08:48 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58434 /var/tmp/spdk-nbd.sock 00:06:05.685 03:08:48 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58434 ']' 00:06:05.685 03:08:48 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:05.685 03:08:48 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:05.685 03:08:48 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:05.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
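The repeated `(( i <= 20 ))` / `grep -q -w nbdN /proc/partitions` / `break` sequence in the trace is the `waitfornbd` polling pattern: retry until the kernel registers the nbd device, then probe it with a direct-I/O read. A simplified sketch of the polling half, with the partitions file made a parameter so it can run without real nbd devices (the real helper in autotest_common.sh also does the `dd iflag=direct` probe):

```shell
# Poll a partitions table until a device name appears, up to 20 tries,
# mirroring the (( i <= 20 )) loop in the trace. partitions_file defaults
# to /proc/partitions; passing another path is for illustration only.
waitfornbd_sketch() {
    local nbd_name=$1 partitions_file=${2:-/proc/partitions} i
    for ((i = 1; i <= 20; i++)); do
        grep -q -w "$nbd_name" "$partitions_file" && return 0
        sleep 0.1
    done
    return 1
}
```

The `-w` flag matters: it matches `nbd0` as a whole word, so the check does not spuriously succeed on `nbd01` or similar names.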
00:06:05.685 03:08:48 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:05.685 03:08:48 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:05.685 03:08:48 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.685 03:08:48 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:05.685 03:08:48 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.944 Malloc0 00:06:06.203 03:08:49 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:06.494 Malloc1 00:06:06.494 03:08:49 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:06.494 03:08:49 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:06.494 /dev/nbd0 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:06.494 03:08:49 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:06.494 03:08:49 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:06.494 03:08:49 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:06.494 03:08:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:06.494 03:08:49 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:06.494 03:08:49 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:06.494 03:08:49 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:06.494 03:08:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:06.494 03:08:49 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:06.494 03:08:49 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.780 1+0 records in 00:06:06.780 1+0 records out 00:06:06.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379881 s, 10.8 MB/s 00:06:06.780 03:08:49 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.780 03:08:49 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:06.780 03:08:49 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.780 
03:08:49 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:06.780 03:08:49 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:06.780 03:08:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.780 03:08:49 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.780 03:08:49 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:06.780 /dev/nbd1 00:06:06.780 03:08:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:06.780 03:08:50 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:06.780 03:08:50 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:06.780 03:08:50 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:06.780 03:08:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:06.780 03:08:50 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:06.780 03:08:50 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:06.780 03:08:50 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:06.780 03:08:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:06.780 03:08:50 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:06.780 03:08:50 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:06.780 1+0 records in 00:06:06.780 1+0 records out 00:06:06.780 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027086 s, 15.1 MB/s 00:06:06.780 03:08:50 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.780 03:08:50 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:06.780 03:08:50 event.app_repeat 
-- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:06.780 03:08:50 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:06.780 03:08:50 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:06.780 03:08:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:06.780 03:08:50 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:06.780 03:08:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.780 03:08:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.780 03:08:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:07.039 { 00:06:07.039 "nbd_device": "/dev/nbd0", 00:06:07.039 "bdev_name": "Malloc0" 00:06:07.039 }, 00:06:07.039 { 00:06:07.039 "nbd_device": "/dev/nbd1", 00:06:07.039 "bdev_name": "Malloc1" 00:06:07.039 } 00:06:07.039 ]' 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:07.039 { 00:06:07.039 "nbd_device": "/dev/nbd0", 00:06:07.039 "bdev_name": "Malloc0" 00:06:07.039 }, 00:06:07.039 { 00:06:07.039 "nbd_device": "/dev/nbd1", 00:06:07.039 "bdev_name": "Malloc1" 00:06:07.039 } 00:06:07.039 ]' 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:07.039 /dev/nbd1' 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:07.039 /dev/nbd1' 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:07.039 
03:08:50 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:07.039 256+0 records in 00:06:07.039 256+0 records out 00:06:07.039 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141745 s, 74.0 MB/s 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.039 03:08:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:07.298 256+0 records in 00:06:07.298 256+0 records out 00:06:07.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0220009 s, 47.7 MB/s 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:07.298 256+0 records in 00:06:07.298 256+0 records out 00:06:07.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0252993 s, 41.4 MB/s 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
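The `nbd_dd_data_verify ... write` and `... verify` calls traced above form a round-trip: fill a temp file with 1 MiB of random data, `dd` it onto each nbd device, then `cmp -b -n 1M` each device back against the source. A self-contained stand-in, using plain files in place of `/dev/nbd0` and `/dev/nbd1` so it runs anywhere:

```shell
# Write/verify round-trip in the spirit of nbd_dd_data_verify. Targets
# are ordinary files standing in for the /dev/nbd* devices in the trace.
src=$(mktemp)
dd if=/dev/urandom of="$src" bs=4096 count=256 status=none   # 1 MiB pattern
for target in /tmp/gr_nbd0 /tmp/gr_nbd1; do
    dd if="$src" of="$target" bs=4096 count=256 status=none  # write phase
done
verify_ok=1
for target in /tmp/gr_nbd0 /tmp/gr_nbd1; do
    cmp -b -n 1M "$src" "$target" || verify_ok=0             # verify phase
done
rm -f "$src" /tmp/gr_nbd0 /tmp/gr_nbd1
```

In the real test the `oflag=direct` on the write bypasses the page cache, so the subsequent `cmp` reads back what actually reached the block device rather than cached pages.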
00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.298 03:08:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:07.557 03:08:50 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:07.557 03:08:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:07.557 03:08:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:07.557 03:08:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.557 03:08:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.557 03:08:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:07.557 03:08:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.557 03:08:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.557 03:08:50 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:07.557 03:08:50 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:07.816 03:08:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:07.816 03:08:50 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:07.816 03:08:50 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:07.816 03:08:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:07.816 03:08:50 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:07.816 03:08:50 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:07.816 03:08:50 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:07.816 03:08:50 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:07.816 03:08:50 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:07.816 03:08:50 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:07.816 03:08:50 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:08.076 03:08:51 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:08.076 03:08:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:08.076 03:08:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:08.076 03:08:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:08.076 03:08:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:08.076 03:08:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:08.076 03:08:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:08.076 03:08:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:08.076 03:08:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:08.076 03:08:51 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:08.076 03:08:51 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:08.076 03:08:51 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:08.076 03:08:51 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:08.335 03:08:51 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:09.713 [2024-10-09 03:08:52.929731] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:09.972 [2024-10-09 03:08:53.139687] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.972 [2024-10-09 03:08:53.139715] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.272 [2024-10-09 03:08:53.340022] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:10.272 [2024-10-09 03:08:53.340099] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:11.648 spdk_app_start Round 2 00:06:11.648 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
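Each round begins with `waitforlisten 58434 /var/tmp/spdk-nbd.sock`, which blocks until the app instance is up and its RPC socket is accepting connections, with `local max_retries=100` as seen in the trace. An illustrative retry loop under that assumption; the real helper also verifies the process is alive and that an RPC call succeeds, while this sketch only polls for the socket path:

```shell
# Retry until a path exists, up to max_retries attempts (default 100,
# matching `local max_retries=100` in the trace). Simplified: the real
# waitforlisten additionally pings the RPC server over the socket.
waitforlisten_sketch() {
    local sock=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$sock" ] && return 0
        sleep 0.1
    done
    return 1
}
```

This is why the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock..." message appears once per round: the helper prints it, then polls until the socket is ready or retries are exhausted.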
00:06:11.648 03:08:54 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:11.648 03:08:54 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:11.648 03:08:54 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58434 /var/tmp/spdk-nbd.sock 00:06:11.648 03:08:54 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58434 ']' 00:06:11.648 03:08:54 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:11.648 03:08:54 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.648 03:08:54 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:11.648 03:08:54 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.648 03:08:54 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:11.648 03:08:54 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:11.648 03:08:54 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:11.648 03:08:54 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:11.908 Malloc0 00:06:11.908 03:08:55 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:12.166 Malloc1 00:06:12.166 03:08:55 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.166 03:08:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:12.427 /dev/nbd0 00:06:12.427 03:08:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:12.427 03:08:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:12.427 03:08:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:12.427 03:08:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:12.427 03:08:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:12.427 03:08:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:12.427 03:08:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:12.427 03:08:55 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:12.427 03:08:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 
00:06:12.427 03:08:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:12.427 03:08:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.427 1+0 records in 00:06:12.427 1+0 records out 00:06:12.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000335242 s, 12.2 MB/s 00:06:12.427 03:08:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.427 03:08:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:12.427 03:08:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.427 03:08:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:12.427 03:08:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:12.427 03:08:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.427 03:08:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.427 03:08:55 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:12.686 /dev/nbd1 00:06:12.686 03:08:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:12.686 03:08:55 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:12.686 03:08:55 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:12.686 03:08:55 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:12.686 03:08:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:12.686 03:08:55 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:12.686 03:08:55 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:12.686 03:08:55 event.app_repeat -- 
common/autotest_common.sh@873 -- # break 00:06:12.686 03:08:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:12.686 03:08:55 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:12.686 03:08:55 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:12.686 1+0 records in 00:06:12.686 1+0 records out 00:06:12.686 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000188422 s, 21.7 MB/s 00:06:12.686 03:08:55 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.686 03:08:55 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:12.686 03:08:55 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:12.686 03:08:55 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:12.686 03:08:55 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:12.686 03:08:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:12.686 03:08:55 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:12.686 03:08:55 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:12.687 03:08:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.687 03:08:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:12.945 03:08:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:12.945 { 00:06:12.945 "nbd_device": "/dev/nbd0", 00:06:12.945 "bdev_name": "Malloc0" 00:06:12.945 }, 00:06:12.945 { 00:06:12.945 "nbd_device": "/dev/nbd1", 00:06:12.945 "bdev_name": "Malloc1" 00:06:12.945 } 00:06:12.945 ]' 00:06:12.945 03:08:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:12.945 { 
00:06:12.945 "nbd_device": "/dev/nbd0", 00:06:12.945 "bdev_name": "Malloc0" 00:06:12.945 }, 00:06:12.945 { 00:06:12.945 "nbd_device": "/dev/nbd1", 00:06:12.945 "bdev_name": "Malloc1" 00:06:12.945 } 00:06:12.945 ]' 00:06:12.945 03:08:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:12.945 03:08:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:12.945 /dev/nbd1' 00:06:12.945 03:08:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:12.945 /dev/nbd1' 00:06:12.945 03:08:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:12.945 03:08:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:12.945 03:08:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:12.946 256+0 records in 00:06:12.946 256+0 records out 00:06:12.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0129996 s, 80.7 MB/s 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.946 03:08:56 event.app_repeat -- 
bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:12.946 256+0 records in 00:06:12.946 256+0 records out 00:06:12.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0185902 s, 56.4 MB/s 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:12.946 256+0 records in 00:06:12.946 256+0 records out 00:06:12.946 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0245392 s, 42.7 MB/s 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:12.946 03:08:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:13.204 03:08:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:13.204 03:08:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:13.204 03:08:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:13.204 03:08:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.204 03:08:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.204 03:08:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:13.204 03:08:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.204 03:08:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.204 03:08:56 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:13.204 03:08:56 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:13.464 03:08:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:13.464 03:08:56 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:13.464 03:08:56 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:13.464 03:08:56 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:13.464 03:08:56 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:13.464 03:08:56 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:13.464 03:08:56 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:13.464 03:08:56 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:13.464 03:08:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:13.464 03:08:56 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:13.464 03:08:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:13.723 03:08:56 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:13.723 03:08:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:13.723 03:08:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:13.723 03:08:56 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:13.723 03:08:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:13.723 03:08:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:13.723 03:08:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:13.723 03:08:56 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:13.723 03:08:56 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:13.723 03:08:56 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:13.723 03:08:56 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:13.723 03:08:56 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:13.723 03:08:56 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:14.291 03:08:57 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:15.666 
[2024-10-09 03:08:58.668287] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:15.666 [2024-10-09 03:08:58.874163] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.666 [2024-10-09 03:08:58.874164] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.926 [2024-10-09 03:08:59.067885] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:15.926 [2024-10-09 03:08:59.067980] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:17.304 03:09:00 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58434 /var/tmp/spdk-nbd.sock 00:06:17.304 03:09:00 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 58434 ']' 00:06:17.304 03:09:00 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:17.304 03:09:00 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:17.304 03:09:00 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:17.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:17.304 03:09:00 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:17.304 03:09:00 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:17.304 03:09:00 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:17.304 03:09:00 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:17.304 03:09:00 event.app_repeat -- event/event.sh@39 -- # killprocess 58434 00:06:17.304 03:09:00 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 58434 ']' 00:06:17.304 03:09:00 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 58434 00:06:17.304 03:09:00 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:17.304 03:09:00 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:17.304 03:09:00 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58434 00:06:17.563 killing process with pid 58434 00:06:17.563 03:09:00 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:17.563 03:09:00 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:17.563 03:09:00 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58434' 00:06:17.563 03:09:00 event.app_repeat -- common/autotest_common.sh@969 -- # kill 58434 00:06:17.563 03:09:00 event.app_repeat -- common/autotest_common.sh@974 -- # wait 58434 00:06:18.502 spdk_app_start is called in Round 0. 00:06:18.502 Shutdown signal received, stop current app iteration 00:06:18.502 Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 reinitialization... 00:06:18.502 spdk_app_start is called in Round 1. 00:06:18.502 Shutdown signal received, stop current app iteration 00:06:18.502 Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 reinitialization... 00:06:18.502 spdk_app_start is called in Round 2. 
00:06:18.502 Shutdown signal received, stop current app iteration 00:06:18.502 Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 reinitialization... 00:06:18.502 spdk_app_start is called in Round 3. 00:06:18.502 Shutdown signal received, stop current app iteration 00:06:18.502 03:09:01 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:18.502 03:09:01 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:18.502 00:06:18.502 real 0m19.605s 00:06:18.502 user 0m41.036s 00:06:18.502 sys 0m2.809s 00:06:18.502 03:09:01 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:18.502 03:09:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.502 ************************************ 00:06:18.502 END TEST app_repeat 00:06:18.502 ************************************ 00:06:18.762 03:09:01 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:18.762 03:09:01 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:18.762 03:09:01 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.762 03:09:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.762 03:09:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.762 ************************************ 00:06:18.762 START TEST cpu_locks 00:06:18.762 ************************************ 00:06:18.762 03:09:01 event.cpu_locks -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:06:18.762 * Looking for test storage... 
00:06:18.762 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:18.762 03:09:01 event.cpu_locks -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:06:18.762 03:09:01 event.cpu_locks -- common/autotest_common.sh@1681 -- # lcov --version 00:06:18.762 03:09:01 event.cpu_locks -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:06:18.762 03:09:02 event.cpu_locks -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.762 03:09:02 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:18.762 03:09:02 event.cpu_locks -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.762 03:09:02 event.cpu_locks -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:06:18.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.762 --rc genhtml_branch_coverage=1 00:06:18.762 --rc genhtml_function_coverage=1 00:06:18.762 --rc genhtml_legend=1 00:06:18.762 --rc geninfo_all_blocks=1 00:06:18.762 --rc geninfo_unexecuted_blocks=1 00:06:18.762 00:06:18.762 ' 00:06:18.762 03:09:02 event.cpu_locks -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:06:18.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.762 --rc genhtml_branch_coverage=1 00:06:18.762 --rc genhtml_function_coverage=1 00:06:18.762 --rc genhtml_legend=1 00:06:18.762 --rc geninfo_all_blocks=1 00:06:18.762 --rc geninfo_unexecuted_blocks=1 
00:06:18.762 00:06:18.762 ' 00:06:18.762 03:09:02 event.cpu_locks -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:06:18.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.762 --rc genhtml_branch_coverage=1 00:06:18.762 --rc genhtml_function_coverage=1 00:06:18.762 --rc genhtml_legend=1 00:06:18.762 --rc geninfo_all_blocks=1 00:06:18.762 --rc geninfo_unexecuted_blocks=1 00:06:18.762 00:06:18.762 ' 00:06:18.762 03:09:02 event.cpu_locks -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:06:18.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.762 --rc genhtml_branch_coverage=1 00:06:18.762 --rc genhtml_function_coverage=1 00:06:18.762 --rc genhtml_legend=1 00:06:18.762 --rc geninfo_all_blocks=1 00:06:18.762 --rc geninfo_unexecuted_blocks=1 00:06:18.762 00:06:18.762 ' 00:06:18.762 03:09:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:18.762 03:09:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:18.762 03:09:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:18.762 03:09:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:18.762 03:09:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:18.762 03:09:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:18.762 03:09:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.022 ************************************ 00:06:19.022 START TEST default_locks 00:06:19.022 ************************************ 00:06:19.022 03:09:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:19.022 03:09:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58887 00:06:19.022 03:09:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58887 00:06:19.022 03:09:02 event.cpu_locks.default_locks -- 
event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:19.022 03:09:02 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58887 ']' 00:06:19.022 03:09:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.022 03:09:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:19.022 03:09:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.022 03:09:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.022 03:09:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:19.022 [2024-10-09 03:09:02.168906] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:19.022 [2024-10-09 03:09:02.169023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58887 ] 00:06:19.282 [2024-10-09 03:09:02.333080] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.282 [2024-10-09 03:09:02.548150] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.219 03:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:20.219 03:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:20.219 03:09:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58887 00:06:20.219 03:09:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58887 00:06:20.219 03:09:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:20.512 03:09:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58887 00:06:20.512 03:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 58887 ']' 00:06:20.512 03:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 58887 00:06:20.512 03:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:20.512 03:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:20.512 03:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58887 00:06:20.512 03:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:20.512 03:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:20.512 killing process with pid 58887 00:06:20.512 03:09:03 event.cpu_locks.default_locks -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 58887' 00:06:20.512 03:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 58887 00:06:20.512 03:09:03 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 58887 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58887 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 58887 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 58887 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 58887 ']' 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.081 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (58887) - No such process 00:06:23.081 ERROR: process (pid: 58887) is no longer running 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:23.081 00:06:23.081 real 0m4.183s 00:06:23.081 user 0m4.088s 00:06:23.081 sys 0m0.638s 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.081 03:09:06 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.081 ************************************ 00:06:23.081 END TEST default_locks 00:06:23.081 ************************************ 00:06:23.081 03:09:06 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:23.081 03:09:06 event.cpu_locks -- common/autotest_common.sh@1101 -- # 
'[' 2 -le 1 ']' 00:06:23.081 03:09:06 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.081 03:09:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.081 ************************************ 00:06:23.081 START TEST default_locks_via_rpc 00:06:23.081 ************************************ 00:06:23.081 03:09:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:23.081 03:09:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58962 00:06:23.081 03:09:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.081 03:09:06 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58962 00:06:23.081 03:09:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 58962 ']' 00:06:23.081 03:09:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.081 03:09:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.081 03:09:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.081 03:09:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.081 03:09:06 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:23.340 [2024-10-09 03:09:06.419590] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:06:23.340 [2024-10-09 03:09:06.419709] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58962 ]
00:06:23.340 [2024-10-09 03:09:06.583148] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:23.598 [2024-10-09 03:09:06.788571] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58962
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58962
00:06:24.535 03:09:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:24.794 03:09:07 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58962
00:06:24.794 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 58962 ']'
00:06:24.794 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 58962
00:06:24.794 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:06:24.794 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:24.794 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 58962
00:06:24.794 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:24.794 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 58962
00:06:24.794 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 58962'
00:06:24.794 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 58962
00:06:24.794 03:09:07 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 58962
00:06:27.328
00:06:27.328 real 0m4.128s
00:06:27.328 user 0m4.053s
00:06:27.328 sys 0m0.609s
00:06:27.328 03:09:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:27.328 03:09:10 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:27.328 ************************************
00:06:27.329 END TEST default_locks_via_rpc
00:06:27.329 ************************************
00:06:27.329 03:09:10 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:06:27.329 03:09:10 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:27.329 03:09:10 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:27.329 03:09:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:27.329 ************************************
00:06:27.329 START TEST non_locking_app_on_locked_coremask
00:06:27.329 ************************************
00:06:27.329 03:09:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:06:27.329 03:09:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=59038
00:06:27.329 03:09:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:27.329 03:09:10 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 59038 /var/tmp/spdk.sock
00:06:27.329 03:09:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59038 ']'
00:06:27.329 03:09:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:27.329 03:09:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:27.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:27.329 03:09:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:27.329 03:09:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:27.329 03:09:10 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:27.329 [2024-10-09 03:09:10.610900] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization...
00:06:27.329 [2024-10-09 03:09:10.611034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59038 ]
00:06:27.588 [2024-10-09 03:09:10.758761] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:27.846 [2024-10-09 03:09:10.958762] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:28.784 03:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:28.784 03:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:28.784 03:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=59054
00:06:28.784 03:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:06:28.784 03:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 59054 /var/tmp/spdk2.sock
00:06:28.784 03:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59054 ']'
00:06:28.784 03:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:28.784 03:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:28.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:28.784 03:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:28.784 03:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:28.784 03:09:11 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:28.784 [2024-10-09 03:09:11.893041] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization...
00:06:28.784 [2024-10-09 03:09:11.893160] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59054 ]
00:06:28.784 [2024-10-09 03:09:12.041429] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:28.784 [2024-10-09 03:09:12.041497] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:29.352 [2024-10-09 03:09:12.474986] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:31.280 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:31.280 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:31.280 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 59038
00:06:31.280 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59038
00:06:31.280 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:31.539 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 59038
00:06:31.539 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59038 ']'
00:06:31.539 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59038
00:06:31.539 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:31.539 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:31.539 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59038
00:06:31.539 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:31.539 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 59038
00:06:31.539 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59038'
00:06:31.539 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59038
00:06:31.539 03:09:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59038
00:06:36.822 03:09:19 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 59054
00:06:36.822 03:09:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59054 ']'
00:06:36.822 03:09:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59054
00:06:36.822 03:09:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:36.822 03:09:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:36.822 03:09:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59054
00:06:36.822 03:09:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:36.822 03:09:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 59054
00:06:36.822 03:09:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59054'
00:06:36.822 03:09:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59054
00:06:36.822 03:09:19 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59054
00:06:39.357
00:06:39.357 real 0m11.732s
00:06:39.357 user 0m11.960s
00:06:39.357 sys 0m1.171s
00:06:39.357 03:09:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:39.357 03:09:22 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:39.357 ************************************
00:06:39.357 END TEST non_locking_app_on_locked_coremask
00:06:39.357 ************************************
00:06:39.357 03:09:22 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:39.357 03:09:22 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:39.357 03:09:22 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:39.357 03:09:22 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:39.357 ************************************
00:06:39.357 START TEST locking_app_on_unlocked_coremask
00:06:39.357 ************************************
00:06:39.357 03:09:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:06:39.357 03:09:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=59205
00:06:39.357 03:09:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:39.357 03:09:22 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 59205 /var/tmp/spdk.sock
00:06:39.357 03:09:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59205 ']'
00:06:39.357 03:09:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:39.357 03:09:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:39.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:39.357 03:09:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:39.357 03:09:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:39.357 03:09:22 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:39.357 [2024-10-09 03:09:22.412936] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization...
00:06:39.357 [2024-10-09 03:09:22.413066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59205 ]
00:06:39.357 [2024-10-09 03:09:22.574530] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:39.357 [2024-10-09 03:09:22.574596] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:39.616 [2024-10-09 03:09:22.782203] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:40.554 03:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:40.554 03:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:40.554 03:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=59221
00:06:40.554 03:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 59221 /var/tmp/spdk2.sock
00:06:40.554 03:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:40.554 03:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59221 ']'
00:06:40.554 03:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:40.554 03:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:40.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:40.555 03:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:40.555 03:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:40.555 03:09:23 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:40.555 [2024-10-09 03:09:23.710465] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization...
00:06:40.555 [2024-10-09 03:09:23.710613] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59221 ]
00:06:40.814 [2024-10-09 03:09:23.865937] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:41.073 [2024-10-09 03:09:24.276991] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:42.979 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:42.979 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:42.979 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 59221
00:06:42.979 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59221
00:06:42.979 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:43.553 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 59205
00:06:43.553 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59205 ']'
00:06:43.553 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59205
00:06:43.553 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:43.553 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:43.553 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59205
00:06:43.553 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:43.553 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 59205
00:06:43.553 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59205'
00:06:43.553 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59205
00:06:43.553 03:09:26 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59205
00:06:48.827 03:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 59221
00:06:48.827 03:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59221 ']'
00:06:48.827 03:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 59221
00:06:48.827 03:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:48.827 03:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:48.827 03:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59221
00:06:48.827 03:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:48.827 03:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
killing process with pid 59221
00:06:48.827 03:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59221'
00:06:48.827 03:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 59221
00:06:48.827 03:09:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 59221
00:06:51.364
00:06:51.364 real 0m11.888s
00:06:51.364 user 0m12.131s
00:06:51.364 sys 0m1.195s
00:06:51.364 03:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:51.364 03:09:34 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:51.364 ************************************
00:06:51.364 END TEST locking_app_on_unlocked_coremask
00:06:51.364 ************************************
00:06:51.364 03:09:34 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:51.364 03:09:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:51.364 03:09:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:51.365 03:09:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:51.365 ************************************
00:06:51.365 START TEST locking_app_on_locked_coremask
************************************
00:06:51.365 03:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:06:51.365 03:09:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59375
00:06:51.365 03:09:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59375 /var/tmp/spdk.sock
00:06:51.365 03:09:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1
00:06:51.365 03:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59375 ']'
00:06:51.365 03:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:51.365 03:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:51.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:51.365 03:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:51.365 03:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:51.365 03:09:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:51.365 [2024-10-09 03:09:34.364229] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization...
00:06:51.365 [2024-10-09 03:09:34.364363] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59375 ]
00:06:51.365 [2024-10-09 03:09:34.528944] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:51.624 [2024-10-09 03:09:34.742716] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59391
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59391 /var/tmp/spdk2.sock
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59391 /var/tmp/spdk2.sock
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59391 /var/tmp/spdk2.sock
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 59391 ']'
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:52.610 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:52.610 03:09:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:52.610 [2024-10-09 03:09:35.698257] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization...
00:06:52.610 [2024-10-09 03:09:35.698381] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59391 ]
00:06:52.610 [2024-10-09 03:09:35.858320] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59375 has claimed it.
00:06:52.610 [2024-10-09 03:09:35.858421] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:53.179 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59391) - No such process
00:06:53.179 ERROR: process (pid: 59391) is no longer running
00:06:53.179 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:53.179 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:06:53.179 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:06:53.179 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:06:53.179 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:06:53.179 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:06:53.179 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59375
00:06:53.179 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59375
00:06:53.179 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:53.747 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59375
00:06:53.747 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 59375 ']'
00:06:53.747 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 59375
00:06:53.747 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:06:53.747 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:06:53.747 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59375
killing process with pid 59375
00:06:53.747 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:06:53.747 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:06:53.747 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59375'
00:06:53.747 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 59375
00:06:53.747 03:09:36 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 59375
00:06:56.283 ************************************
00:06:56.283 END TEST locking_app_on_locked_coremask
00:06:56.283 ************************************
00:06:56.283
00:06:56.283 real 0m5.035s
00:06:56.283 user 0m5.192s
00:06:56.283 sys 0m0.847s
00:06:56.283 03:09:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:06:56.283 03:09:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:56.284 03:09:39 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:56.284 03:09:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:06:56.284 03:09:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:06:56.284 03:09:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:56.284 ************************************
00:06:56.284 START TEST locking_overlapped_coremask
00:06:56.284 ************************************
00:06:56.284 03:09:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:06:56.284 03:09:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59466
00:06:56.284 03:09:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7
00:06:56.284 03:09:39 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59466 /var/tmp/spdk.sock
00:06:56.284 03:09:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59466 ']'
00:06:56.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:56.284 03:09:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:56.284 03:09:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:56.284 03:09:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:56.284 03:09:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:56.284 03:09:39 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:56.284 [2024-10-09 03:09:39.458161] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization...
00:06:56.284 [2024-10-09 03:09:39.458389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59466 ]
00:06:56.543 [2024-10-09 03:09:39.623024] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:56.543 [2024-10-09 03:09:39.829193] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1
00:06:56.543 [2024-10-09 03:09:39.829326] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:06:56.543 [2024-10-09 03:09:39.829379] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59484
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59484 /var/tmp/spdk2.sock
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 59484 /var/tmp/spdk2.sock
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 59484 /var/tmp/spdk2.sock
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 59484 ']'
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:57.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:06:57.481 03:09:40 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:57.481 [2024-10-09 03:09:40.780162] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization...
00:06:57.481 [2024-10-09 03:09:40.780386] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59484 ]
00:06:57.740 [2024-10-09 03:09:40.933004] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59466 has claimed it.
00:06:57.740 [2024-10-09 03:09:40.936886] app.c: 910:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:58.310 ERROR: process (pid: 59484) is no longer running 00:06:58.310 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 846: kill: (59484) - No such process 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59466 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 59466 ']' 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 59466 00:06:58.310 03:09:41 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59466 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:58.310 killing process with pid 59466 00:06:58.310 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59466' 00:06:58.311 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 59466 00:06:58.311 03:09:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 59466 00:07:01.609 00:07:01.609 real 0m4.860s 00:07:01.609 user 0m12.754s 00:07:01.609 sys 0m0.593s 00:07:01.609 03:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:01.609 03:09:44 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.609 ************************************ 00:07:01.609 END TEST locking_overlapped_coremask 00:07:01.609 ************************************ 00:07:01.609 03:09:44 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:01.609 03:09:44 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:01.609 03:09:44 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:01.609 03:09:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.609 ************************************ 00:07:01.609 START TEST 
locking_overlapped_coremask_via_rpc 00:07:01.609 ************************************ 00:07:01.609 03:09:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:01.609 03:09:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59559 00:07:01.609 03:09:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59559 /var/tmp/spdk.sock 00:07:01.609 03:09:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59559 ']' 00:07:01.609 03:09:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.609 03:09:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.609 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.609 03:09:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.609 03:09:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.609 03:09:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:01.609 03:09:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:01.609 [2024-10-09 03:09:44.373828] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:01.609 [2024-10-09 03:09:44.373958] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59559 ] 00:07:01.609 [2024-10-09 03:09:44.538160] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:01.609 [2024-10-09 03:09:44.538227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.609 [2024-10-09 03:09:44.795939] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.609 [2024-10-09 03:09:44.796066] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.609 [2024-10-09 03:09:44.796104] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.548 03:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.548 03:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:02.548 03:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59577 00:07:02.548 03:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59577 /var/tmp/spdk2.sock 00:07:02.548 03:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59577 ']' 00:07:02.548 03:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:02.548 03:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.548 03:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:02.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:02.548 03:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.548 03:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.548 03:09:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:02.808 [2024-10-09 03:09:45.908198] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:07:02.808 [2024-10-09 03:09:45.908332] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59577 ] 00:07:02.808 [2024-10-09 03:09:46.060481] app.c: 914:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:02.808 [2024-10-09 03:09:46.060536] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.377 [2024-10-09 03:09:46.493518] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.377 [2024-10-09 03:09:46.493650] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.377 [2024-10-09 03:09:46.493704] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 4 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.286 03:09:48 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.286 [2024-10-09 03:09:48.514077] app.c: 779:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59559 has claimed it. 00:07:05.286 request: 00:07:05.286 { 00:07:05.286 "method": "framework_enable_cpumask_locks", 00:07:05.286 "req_id": 1 00:07:05.286 } 00:07:05.286 Got JSON-RPC error response 00:07:05.286 response: 00:07:05.286 { 00:07:05.286 "code": -32603, 00:07:05.286 "message": "Failed to claim CPU core: 2" 00:07:05.286 } 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59559 /var/tmp/spdk.sock 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # 
'[' -z 59559 ']' 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.286 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.546 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.546 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.546 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59577 /var/tmp/spdk2.sock 00:07:05.546 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 59577 ']' 00:07:05.546 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.546 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:05.546 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:07:05.546 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.546 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.805 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.805 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.805 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:05.805 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:05.805 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:05.805 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:05.805 00:07:05.805 real 0m4.690s 00:07:05.805 user 0m1.324s 00:07:05.805 sys 0m0.190s 00:07:05.805 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.805 03:09:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.805 ************************************ 00:07:05.805 END TEST locking_overlapped_coremask_via_rpc 00:07:05.805 ************************************ 00:07:05.805 03:09:49 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:05.805 03:09:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59559 ]] 00:07:05.805 03:09:49 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59559 00:07:05.805 03:09:49 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59559 ']' 00:07:05.805 03:09:49 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59559 00:07:05.805 03:09:49 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:05.805 03:09:49 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.805 03:09:49 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59559 00:07:05.805 03:09:49 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.805 03:09:49 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.805 killing process with pid 59559 00:07:05.805 03:09:49 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59559' 00:07:05.805 03:09:49 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59559 00:07:05.805 03:09:49 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59559 00:07:09.097 03:09:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59577 ]] 00:07:09.097 03:09:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59577 00:07:09.097 03:09:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59577 ']' 00:07:09.097 03:09:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59577 00:07:09.097 03:09:51 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:09.097 03:09:51 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:09.097 03:09:51 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59577 00:07:09.097 03:09:51 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:09.097 03:09:51 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:09.097 killing process with pid 59577 00:07:09.097 03:09:51 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 59577' 00:07:09.097 03:09:51 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 59577 00:07:09.097 03:09:51 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 59577 00:07:11.631 03:09:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:11.631 03:09:54 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:11.631 03:09:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59559 ]] 00:07:11.631 03:09:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59559 00:07:11.631 03:09:54 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59559 ']' 00:07:11.631 03:09:54 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59559 00:07:11.631 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59559) - No such process 00:07:11.631 03:09:54 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59559 is not found' 00:07:11.631 Process with pid 59559 is not found 00:07:11.631 03:09:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59577 ]] 00:07:11.631 03:09:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59577 00:07:11.631 03:09:54 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 59577 ']' 00:07:11.631 03:09:54 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 59577 00:07:11.632 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (59577) - No such process 00:07:11.632 Process with pid 59577 is not found 00:07:11.632 03:09:54 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 59577 is not found' 00:07:11.632 03:09:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:11.632 00:07:11.632 real 0m52.681s 00:07:11.632 user 1m29.787s 00:07:11.632 sys 0m6.592s 00:07:11.632 03:09:54 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.632 03:09:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.632 
************************************ 00:07:11.632 END TEST cpu_locks 00:07:11.632 ************************************ 00:07:11.632 00:07:11.632 real 1m25.681s 00:07:11.632 user 2m35.320s 00:07:11.632 sys 0m10.639s 00:07:11.632 03:09:54 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:11.632 03:09:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:11.632 ************************************ 00:07:11.632 END TEST event 00:07:11.632 ************************************ 00:07:11.632 03:09:54 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:11.632 03:09:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:11.632 03:09:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.632 03:09:54 -- common/autotest_common.sh@10 -- # set +x 00:07:11.632 ************************************ 00:07:11.632 START TEST thread 00:07:11.632 ************************************ 00:07:11.632 03:09:54 thread -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:11.632 * Looking for test storage... 
00:07:11.632 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:11.632 03:09:54 thread -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:11.632 03:09:54 thread -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:11.632 03:09:54 thread -- common/autotest_common.sh@1681 -- # lcov --version 00:07:11.632 03:09:54 thread -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:11.632 03:09:54 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:11.632 03:09:54 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:11.632 03:09:54 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:11.632 03:09:54 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:11.632 03:09:54 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:11.632 03:09:54 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:11.632 03:09:54 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:11.632 03:09:54 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:11.632 03:09:54 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:11.632 03:09:54 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:11.632 03:09:54 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:11.632 03:09:54 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:11.632 03:09:54 thread -- scripts/common.sh@345 -- # : 1 00:07:11.632 03:09:54 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:11.632 03:09:54 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:11.632 03:09:54 thread -- scripts/common.sh@365 -- # decimal 1 00:07:11.632 03:09:54 thread -- scripts/common.sh@353 -- # local d=1 00:07:11.632 03:09:54 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:11.632 03:09:54 thread -- scripts/common.sh@355 -- # echo 1 00:07:11.632 03:09:54 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:11.632 03:09:54 thread -- scripts/common.sh@366 -- # decimal 2 00:07:11.632 03:09:54 thread -- scripts/common.sh@353 -- # local d=2 00:07:11.632 03:09:54 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:11.632 03:09:54 thread -- scripts/common.sh@355 -- # echo 2 00:07:11.632 03:09:54 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:11.632 03:09:54 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:11.632 03:09:54 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:11.632 03:09:54 thread -- scripts/common.sh@368 -- # return 0 00:07:11.632 03:09:54 thread -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:11.632 03:09:54 thread -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:11.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.632 --rc genhtml_branch_coverage=1 00:07:11.632 --rc genhtml_function_coverage=1 00:07:11.632 --rc genhtml_legend=1 00:07:11.632 --rc geninfo_all_blocks=1 00:07:11.632 --rc geninfo_unexecuted_blocks=1 00:07:11.632 00:07:11.632 ' 00:07:11.632 03:09:54 thread -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:11.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.632 --rc genhtml_branch_coverage=1 00:07:11.632 --rc genhtml_function_coverage=1 00:07:11.632 --rc genhtml_legend=1 00:07:11.632 --rc geninfo_all_blocks=1 00:07:11.632 --rc geninfo_unexecuted_blocks=1 00:07:11.632 00:07:11.632 ' 00:07:11.632 03:09:54 thread -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:11.632 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.632 --rc genhtml_branch_coverage=1 00:07:11.632 --rc genhtml_function_coverage=1 00:07:11.632 --rc genhtml_legend=1 00:07:11.632 --rc geninfo_all_blocks=1 00:07:11.632 --rc geninfo_unexecuted_blocks=1 00:07:11.632 00:07:11.632 ' 00:07:11.632 03:09:54 thread -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:11.632 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:11.632 --rc genhtml_branch_coverage=1 00:07:11.632 --rc genhtml_function_coverage=1 00:07:11.632 --rc genhtml_legend=1 00:07:11.632 --rc geninfo_all_blocks=1 00:07:11.632 --rc geninfo_unexecuted_blocks=1 00:07:11.632 00:07:11.632 ' 00:07:11.632 03:09:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:11.632 03:09:54 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:11.632 03:09:54 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:11.632 03:09:54 thread -- common/autotest_common.sh@10 -- # set +x 00:07:11.632 ************************************ 00:07:11.632 START TEST thread_poller_perf 00:07:11.632 ************************************ 00:07:11.632 03:09:54 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:11.632 [2024-10-09 03:09:54.882482] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:11.632 [2024-10-09 03:09:54.882580] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59783 ] 00:07:11.891 [2024-10-09 03:09:55.046677] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.151 [2024-10-09 03:09:55.311878] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.151 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:13.530 [2024-10-09T03:09:56.833Z] ====================================== 00:07:13.530 [2024-10-09T03:09:56.833Z] busy:2298778940 (cyc) 00:07:13.530 [2024-10-09T03:09:56.833Z] total_run_count: 390000 00:07:13.530 [2024-10-09T03:09:56.833Z] tsc_hz: 2290000000 (cyc) 00:07:13.530 [2024-10-09T03:09:56.833Z] ====================================== 00:07:13.530 [2024-10-09T03:09:56.833Z] poller_cost: 5894 (cyc), 2573 (nsec) 00:07:13.530 ************************************ 00:07:13.530 END TEST thread_poller_perf 00:07:13.530 ************************************ 00:07:13.530 00:07:13.530 real 0m1.844s 00:07:13.530 user 0m1.624s 00:07:13.530 sys 0m0.111s 00:07:13.530 03:09:56 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:13.530 03:09:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:13.530 03:09:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:13.530 03:09:56 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:13.530 03:09:56 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:13.530 03:09:56 thread -- common/autotest_common.sh@10 -- # set +x 00:07:13.530 ************************************ 00:07:13.530 START TEST thread_poller_perf 00:07:13.530 
************************************ 00:07:13.530 03:09:56 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:13.530 [2024-10-09 03:09:56.811954] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:07:13.530 [2024-10-09 03:09:56.812055] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59820 ] 00:07:13.789 [2024-10-09 03:09:56.973804] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.049 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:14.049 [2024-10-09 03:09:57.187423] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.423 [2024-10-09T03:09:58.726Z] ====================================== 00:07:15.423 [2024-10-09T03:09:58.726Z] busy:2293502060 (cyc) 00:07:15.423 [2024-10-09T03:09:58.726Z] total_run_count: 5334000 00:07:15.423 [2024-10-09T03:09:58.726Z] tsc_hz: 2290000000 (cyc) 00:07:15.423 [2024-10-09T03:09:58.726Z] ====================================== 00:07:15.423 [2024-10-09T03:09:58.726Z] poller_cost: 429 (cyc), 187 (nsec) 00:07:15.423 00:07:15.423 real 0m1.793s 00:07:15.423 user 0m1.587s 00:07:15.423 sys 0m0.098s 00:07:15.423 03:09:58 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.423 03:09:58 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:15.423 ************************************ 00:07:15.423 END TEST thread_poller_perf 00:07:15.423 ************************************ 00:07:15.423 03:09:58 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:15.423 00:07:15.423 real 0m3.977s 00:07:15.423 user 0m3.363s 00:07:15.423 sys 0m0.408s 00:07:15.423 03:09:58 thread -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:15.423 03:09:58 thread -- common/autotest_common.sh@10 -- # set +x 00:07:15.423 ************************************ 00:07:15.423 END TEST thread 00:07:15.423 ************************************ 00:07:15.423 03:09:58 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:15.423 03:09:58 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:15.423 03:09:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:15.423 03:09:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:15.423 03:09:58 -- common/autotest_common.sh@10 -- # set +x 00:07:15.423 ************************************ 00:07:15.423 START TEST app_cmdline 00:07:15.423 ************************************ 00:07:15.423 03:09:58 app_cmdline -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:07:15.683 * Looking for test storage... 00:07:15.683 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@1681 -- # lcov --version 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@338 -- # 
local 'op=<' 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.683 03:09:58 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:15.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.683 --rc genhtml_branch_coverage=1 00:07:15.683 --rc genhtml_function_coverage=1 00:07:15.683 --rc 
genhtml_legend=1 00:07:15.683 --rc geninfo_all_blocks=1 00:07:15.683 --rc geninfo_unexecuted_blocks=1 00:07:15.683 00:07:15.683 ' 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:15.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.683 --rc genhtml_branch_coverage=1 00:07:15.683 --rc genhtml_function_coverage=1 00:07:15.683 --rc genhtml_legend=1 00:07:15.683 --rc geninfo_all_blocks=1 00:07:15.683 --rc geninfo_unexecuted_blocks=1 00:07:15.683 00:07:15.683 ' 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:15.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.683 --rc genhtml_branch_coverage=1 00:07:15.683 --rc genhtml_function_coverage=1 00:07:15.683 --rc genhtml_legend=1 00:07:15.683 --rc geninfo_all_blocks=1 00:07:15.683 --rc geninfo_unexecuted_blocks=1 00:07:15.683 00:07:15.683 ' 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:15.683 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.683 --rc genhtml_branch_coverage=1 00:07:15.683 --rc genhtml_function_coverage=1 00:07:15.683 --rc genhtml_legend=1 00:07:15.683 --rc geninfo_all_blocks=1 00:07:15.683 --rc geninfo_unexecuted_blocks=1 00:07:15.683 00:07:15.683 ' 00:07:15.683 03:09:58 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:15.683 03:09:58 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59911 00:07:15.683 03:09:58 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:15.683 03:09:58 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59911 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 59911 ']' 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@836 -- # 
local max_retries=100 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:15.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:15.683 03:09:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:15.683 [2024-10-09 03:09:58.983480] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:07:15.683 [2024-10-09 03:09:58.983674] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59911 ] 00:07:15.942 [2024-10-09 03:09:59.149264] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.200 [2024-10-09 03:09:59.357958] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.136 03:10:00 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:17.136 03:10:00 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:17.136 03:10:00 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:07:17.136 { 00:07:17.136 "version": "SPDK v25.01-pre git sha1 3c4904078", 00:07:17.136 "fields": { 00:07:17.136 "major": 25, 00:07:17.136 "minor": 1, 00:07:17.136 "patch": 0, 00:07:17.136 "suffix": "-pre", 00:07:17.136 "commit": "3c4904078" 00:07:17.136 } 00:07:17.136 } 00:07:17.136 03:10:00 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:17.136 03:10:00 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:17.136 03:10:00 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:17.136 03:10:00 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:17.136 03:10:00 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:17.136 03:10:00 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:17.136 03:10:00 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:17.136 03:10:00 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:17.136 03:10:00 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:17.136 03:10:00 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:17.395 03:10:00 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:17.395 03:10:00 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:17.395 03:10:00 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@642 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@644 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@644 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@644 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@653 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:17.395 request: 00:07:17.395 { 00:07:17.395 "method": "env_dpdk_get_mem_stats", 00:07:17.395 "req_id": 1 00:07:17.395 } 00:07:17.395 Got JSON-RPC error response 00:07:17.395 response: 00:07:17.395 { 00:07:17.395 "code": -32601, 00:07:17.395 "message": "Method not found" 00:07:17.395 } 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:17.395 03:10:00 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59911 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 59911 ']' 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 59911 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:17.395 03:10:00 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 59911 00:07:17.654 03:10:00 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:17.654 03:10:00 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:17.654 03:10:00 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 59911' 00:07:17.654 killing process with pid 59911 00:07:17.654 03:10:00 app_cmdline -- common/autotest_common.sh@969 -- # kill 59911 00:07:17.654 03:10:00 app_cmdline -- common/autotest_common.sh@974 -- # wait 59911 00:07:20.184 00:07:20.184 real 0m4.542s 00:07:20.184 user 0m4.766s 00:07:20.184 sys 0m0.608s 00:07:20.184 03:10:03 app_cmdline -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.184 03:10:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:20.184 ************************************ 00:07:20.184 END TEST app_cmdline 00:07:20.184 ************************************ 00:07:20.184 03:10:03 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:20.184 03:10:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.184 03:10:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.184 03:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:20.184 ************************************ 00:07:20.184 START TEST version 00:07:20.184 ************************************ 00:07:20.184 03:10:03 version -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:07:20.184 * Looking for test storage... 00:07:20.184 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:07:20.184 03:10:03 version -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:20.184 03:10:03 version -- common/autotest_common.sh@1681 -- # lcov --version 00:07:20.184 03:10:03 version -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:20.184 03:10:03 version -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:20.184 03:10:03 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.184 03:10:03 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.184 03:10:03 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.184 03:10:03 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.184 03:10:03 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.184 03:10:03 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.184 03:10:03 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.184 03:10:03 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.184 03:10:03 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.184 03:10:03 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:07:20.184 03:10:03 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.184 03:10:03 version -- scripts/common.sh@344 -- # case "$op" in 00:07:20.184 03:10:03 version -- scripts/common.sh@345 -- # : 1 00:07:20.184 03:10:03 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.184 03:10:03 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:20.184 03:10:03 version -- scripts/common.sh@365 -- # decimal 1 00:07:20.184 03:10:03 version -- scripts/common.sh@353 -- # local d=1 00:07:20.184 03:10:03 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.184 03:10:03 version -- scripts/common.sh@355 -- # echo 1 00:07:20.450 03:10:03 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.450 03:10:03 version -- scripts/common.sh@366 -- # decimal 2 00:07:20.450 03:10:03 version -- scripts/common.sh@353 -- # local d=2 00:07:20.450 03:10:03 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.450 03:10:03 version -- scripts/common.sh@355 -- # echo 2 00:07:20.450 03:10:03 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.450 03:10:03 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.450 03:10:03 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.450 03:10:03 version -- scripts/common.sh@368 -- # return 0 00:07:20.450 03:10:03 version -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.450 03:10:03 version -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:20.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.450 --rc genhtml_branch_coverage=1 00:07:20.450 --rc genhtml_function_coverage=1 00:07:20.450 --rc genhtml_legend=1 00:07:20.450 --rc geninfo_all_blocks=1 00:07:20.450 --rc geninfo_unexecuted_blocks=1 00:07:20.450 00:07:20.450 ' 00:07:20.450 03:10:03 version -- common/autotest_common.sh@1694 -- # 
LCOV_OPTS=' 00:07:20.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.450 --rc genhtml_branch_coverage=1 00:07:20.450 --rc genhtml_function_coverage=1 00:07:20.450 --rc genhtml_legend=1 00:07:20.450 --rc geninfo_all_blocks=1 00:07:20.450 --rc geninfo_unexecuted_blocks=1 00:07:20.450 00:07:20.450 ' 00:07:20.450 03:10:03 version -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:20.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.450 --rc genhtml_branch_coverage=1 00:07:20.450 --rc genhtml_function_coverage=1 00:07:20.450 --rc genhtml_legend=1 00:07:20.450 --rc geninfo_all_blocks=1 00:07:20.450 --rc geninfo_unexecuted_blocks=1 00:07:20.450 00:07:20.450 ' 00:07:20.450 03:10:03 version -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:20.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.450 --rc genhtml_branch_coverage=1 00:07:20.450 --rc genhtml_function_coverage=1 00:07:20.450 --rc genhtml_legend=1 00:07:20.450 --rc geninfo_all_blocks=1 00:07:20.450 --rc geninfo_unexecuted_blocks=1 00:07:20.450 00:07:20.450 ' 00:07:20.450 03:10:03 version -- app/version.sh@17 -- # get_header_version major 00:07:20.450 03:10:03 version -- app/version.sh@14 -- # cut -f2 00:07:20.450 03:10:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:20.450 03:10:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:20.450 03:10:03 version -- app/version.sh@17 -- # major=25 00:07:20.450 03:10:03 version -- app/version.sh@18 -- # get_header_version minor 00:07:20.450 03:10:03 version -- app/version.sh@14 -- # cut -f2 00:07:20.450 03:10:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:20.450 03:10:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:20.450 03:10:03 version -- app/version.sh@18 -- # minor=1 00:07:20.450 03:10:03 
version -- app/version.sh@19 -- # get_header_version patch 00:07:20.450 03:10:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:20.450 03:10:03 version -- app/version.sh@14 -- # cut -f2 00:07:20.450 03:10:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:20.450 03:10:03 version -- app/version.sh@19 -- # patch=0 00:07:20.450 03:10:03 version -- app/version.sh@20 -- # get_header_version suffix 00:07:20.450 03:10:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:07:20.450 03:10:03 version -- app/version.sh@14 -- # cut -f2 00:07:20.450 03:10:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:20.450 03:10:03 version -- app/version.sh@20 -- # suffix=-pre 00:07:20.450 03:10:03 version -- app/version.sh@22 -- # version=25.1 00:07:20.450 03:10:03 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:20.450 03:10:03 version -- app/version.sh@28 -- # version=25.1rc0 00:07:20.450 03:10:03 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:07:20.450 03:10:03 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:20.450 03:10:03 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:20.450 03:10:03 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:20.450 00:07:20.450 real 0m0.309s 00:07:20.450 user 0m0.189s 00:07:20.450 sys 0m0.171s 00:07:20.450 ************************************ 00:07:20.450 END TEST version 00:07:20.450 ************************************ 00:07:20.450 03:10:03 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:20.451 03:10:03 version -- common/autotest_common.sh@10 -- # set +x 00:07:20.451 
03:10:03 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:20.451 03:10:03 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:07:20.451 03:10:03 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:20.451 03:10:03 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.451 03:10:03 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.451 03:10:03 -- common/autotest_common.sh@10 -- # set +x 00:07:20.451 ************************************ 00:07:20.451 START TEST bdev_raid 00:07:20.451 ************************************ 00:07:20.451 03:10:03 bdev_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:20.721 * Looking for test storage... 00:07:20.721 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:20.721 03:10:03 bdev_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:07:20.721 03:10:03 bdev_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:07:20.721 03:10:03 bdev_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:07:20.721 03:10:03 bdev_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@345 -- # : 1 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.721 03:10:03 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:07:20.722 03:10:03 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:07:20.722 03:10:03 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.722 03:10:03 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:07:20.722 03:10:03 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.722 03:10:03 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.722 03:10:03 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.722 03:10:03 bdev_raid -- scripts/common.sh@368 -- # return 0 00:07:20.722 03:10:03 bdev_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.722 03:10:03 bdev_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:07:20.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.722 --rc genhtml_branch_coverage=1 00:07:20.722 --rc genhtml_function_coverage=1 00:07:20.722 --rc genhtml_legend=1 00:07:20.722 --rc geninfo_all_blocks=1 00:07:20.722 --rc geninfo_unexecuted_blocks=1 00:07:20.722 00:07:20.722 ' 00:07:20.722 03:10:03 bdev_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:07:20.722 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:20.722 --rc genhtml_branch_coverage=1 00:07:20.722 --rc genhtml_function_coverage=1 00:07:20.722 --rc genhtml_legend=1 00:07:20.722 --rc geninfo_all_blocks=1 00:07:20.722 --rc geninfo_unexecuted_blocks=1 00:07:20.722 00:07:20.722 ' 00:07:20.722 03:10:03 bdev_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:07:20.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.722 --rc genhtml_branch_coverage=1 00:07:20.722 --rc genhtml_function_coverage=1 00:07:20.722 --rc genhtml_legend=1 00:07:20.722 --rc geninfo_all_blocks=1 00:07:20.722 --rc geninfo_unexecuted_blocks=1 00:07:20.722 00:07:20.722 ' 00:07:20.722 03:10:03 bdev_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:07:20.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.722 --rc genhtml_branch_coverage=1 00:07:20.722 --rc genhtml_function_coverage=1 00:07:20.722 --rc genhtml_legend=1 00:07:20.722 --rc geninfo_all_blocks=1 00:07:20.722 --rc geninfo_unexecuted_blocks=1 00:07:20.722 00:07:20.722 ' 00:07:20.722 03:10:03 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:20.722 03:10:03 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:20.722 03:10:03 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:07:20.722 03:10:03 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:07:20.722 03:10:03 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:07:20.722 03:10:03 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:07:20.722 03:10:03 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:07:20.722 03:10:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:20.722 03:10:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:20.722 03:10:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:20.722 ************************************ 
00:07:20.722 START TEST raid1_resize_data_offset_test 00:07:20.722 ************************************ 00:07:20.722 03:10:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1125 -- # raid_resize_data_offset_test 00:07:20.722 03:10:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=60104 00:07:20.722 Process raid pid: 60104 00:07:20.722 03:10:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:20.722 03:10:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 60104' 00:07:20.722 03:10:03 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 60104 00:07:20.722 03:10:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@831 -- # '[' -z 60104 ']' 00:07:20.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:20.722 03:10:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.722 03:10:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:20.722 03:10:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.722 03:10:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:20.722 03:10:03 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.722 [2024-10-09 03:10:03.974161] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:20.722 [2024-10-09 03:10:03.975194] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:20.981 [2024-10-09 03:10:04.163280] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:21.238 [2024-10-09 03:10:04.369187] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.496 [2024-10-09 03:10:04.566689] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.496 [2024-10-09 03:10:04.566727] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # return 0 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.755 malloc0 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.755 malloc1 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.755 03:10:04 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.755 null0 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.755 [2024-10-09 03:10:04.975767] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:07:21.755 [2024-10-09 03:10:04.977530] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:21.755 [2024-10-09 03:10:04.977617] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:07:21.755 [2024-10-09 03:10:04.977782] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:21.755 [2024-10-09 03:10:04.977832] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:07:21.755 [2024-10-09 03:10:04.978106] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:21.755 [2024-10-09 03:10:04.978301] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:21.755 [2024-10-09 03:10:04.978346] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:21.755 [2024-10-09 03:10:04.978518] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:21.755 03:10:04 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:21.755 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:21.755 03:10:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 ))
00:07:21.755 03:10:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0
00:07:21.755 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:21.755 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:21.755 [2024-10-09 03:10:05.035694] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: null0
00:07:21.755 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:21.755 03:10:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30
00:07:21.755 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:21.755 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.324 malloc2
00:07:22.324 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.324 03:10:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev Raid malloc2
00:07:22.324 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.324 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.324 [2024-10-09 03:10:05.570805] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:07:22.324 [2024-10-09 03:10:05.585619] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:22.324 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.324 [2024-10-09 03:10:05.587339] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid
00:07:22.324 03:10:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all
00:07:22.324 03:10:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset'
00:07:22.324 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:22.324 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:22.324 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:22.584 03:10:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 ))
00:07:22.584 03:10:05 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 60104
00:07:22.584 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@950 -- # '[' -z 60104 ']'
00:07:22.584 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # kill -0 60104
00:07:22.584 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # uname
00:07:22.584 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:22.584 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60104
killing process with pid 60104
00:07:22.584 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:22.584 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:22.584 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60104'
00:07:22.584 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@969 -- # kill 60104
00:07:22.584 [2024-10-09 03:10:05.678674] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:22.584 03:10:05 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@974 -- # wait 60104
00:07:22.584 [2024-10-09 03:10:05.679374] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled
00:07:22.584 [2024-10-09 03:10:05.679514] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:22.584 [2024-10-09 03:10:05.679538] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2
00:07:22.584 [2024-10-09 03:10:05.707220] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:22.584 [2024-10-09 03:10:05.707506] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:22.584 [2024-10-09 03:10:05.707525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:24.491 [2024-10-09 03:10:07.521237] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:25.871 ************************************
00:07:25.872 END TEST raid1_resize_data_offset_test
00:07:25.872 ************************************
00:07:25.872 03:10:08 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@943 -- # return 0
00:07:25.872
00:07:25.872 real	0m4.998s
00:07:25.872 user	0m4.896s
00:07:25.872 sys	0m0.541s
00:07:25.872 03:10:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:25.872 03:10:08 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.872 03:10:08 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0
00:07:25.872 03:10:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:25.872 03:10:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:25.872 03:10:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:25.872 ************************************
00:07:25.872 START TEST raid0_resize_superblock_test
00:07:25.872 ************************************
00:07:25.872 03:10:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 0
00:07:25.872 03:10:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0
00:07:25.872 03:10:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60193
00:07:25.872 03:10:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60193'
00:07:25.872 03:10:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
Process raid pid: 60193
00:07:25.872 03:10:08 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60193
00:07:25.872 03:10:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60193 ']'
00:07:25.872 03:10:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:25.872 03:10:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:25.872 03:10:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:25.872 03:10:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:25.872 03:10:08 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:25.872 [2024-10-09 03:10:09.043207] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization...
00:07:25.872 [2024-10-09 03:10:09.043321] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:26.131 [2024-10-09 03:10:09.208547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:26.390 [2024-10-09 03:10:09.460336] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:26.650 [2024-10-09 03:10:09.701902] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:26.650 [2024-10-09 03:10:09.701955] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:26.650 03:10:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:26.650 03:10:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:07:26.650 03:10:09 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:07:26.650 03:10:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:26.650 03:10:09 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.590 malloc0
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.590 [2024-10-09 03:10:10.546066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:27.590 [2024-10-09 03:10:10.546153] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:27.590 [2024-10-09 03:10:10.546180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:27.590 [2024-10-09 03:10:10.546193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:27.590 [2024-10-09 03:10:10.548585] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:27.590 [2024-10-09 03:10:10.548720] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.590 763dc619-e81b-4ca3-b1fb-cdaa46f9cc7c
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.590 2baf8f7e-031b-4ea0-8bcf-97881a5b90e6
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.590 4d5cd7a7-b9a4-4197-960b-36d5d340c827
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.590 [2024-10-09 03:10:10.754967] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2baf8f7e-031b-4ea0-8bcf-97881a5b90e6 is claimed
00:07:27.590 [2024-10-09 03:10:10.755088] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4d5cd7a7-b9a4-4197-960b-36d5d340c827 is claimed
00:07:27.590 [2024-10-09 03:10:10.755251] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:27.590 [2024-10-09 03:10:10.755273] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512
00:07:27.590 [2024-10-09 03:10:10.755552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:27.590 [2024-10-09 03:10:10.755752] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:27.590 [2024-10-09 03:10:10.755763] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:07:27.590 [2024-10-09 03:10:10.755970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.590 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.591 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:07:27.591 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:27.591 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:27.591 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.591 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.591 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:27.591 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks'
00:07:27.591 [2024-10-09 03:10:10.862971] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:27.591 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.850 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:27.850 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:27.850 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 ))
00:07:27.850 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100
00:07:27.850 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.850 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.850 [2024-10-09 03:10:10.910944] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:27.850 [2024-10-09 03:10:10.911055] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '2baf8f7e-031b-4ea0-8bcf-97881a5b90e6' was resized: old size 131072, new size 204800
00:07:27.850 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.850 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100
00:07:27.850 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.850 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.850 [2024-10-09 03:10:10.922780] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:07:27.850 [2024-10-09 03:10:10.922864] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '4d5cd7a7-b9a4-4197-960b-36d5d340c827' was resized: old size 131072, new size 204800
00:07:27.850 [2024-10-09 03:10:10.922932] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216
00:07:27.851 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.851 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:27.851 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks'
00:07:27.851 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.851 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.851 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.851 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 ))
00:07:27.851 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:27.851 03:10:10 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks'
00:07:27.851 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.851 03:10:10 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 ))
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks'
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.851 [2024-10-09 03:10:11.038602] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 ))
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.851 [2024-10-09 03:10:11.082362] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0
00:07:27.851 [2024-10-09 03:10:11.082431] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0
00:07:27.851 [2024-10-09 03:10:11.082444] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:07:27.851 [2024-10-09 03:10:11.082463] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1
00:07:27.851 [2024-10-09 03:10:11.082592] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:27.851 [2024-10-09 03:10:11.082630] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:27.851 [2024-10-09 03:10:11.082642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.851 [2024-10-09 03:10:11.094277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:27.851 [2024-10-09 03:10:11.094341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:27.851 [2024-10-09 03:10:11.094364] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180
00:07:27.851 [2024-10-09 03:10:11.094376] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:27.851 [2024-10-09 03:10:11.096750] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:27.851 [2024-10-09 03:10:11.096788] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
00:07:27.851 pt0
00:07:27.851 [2024-10-09 03:10:11.098468] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 2baf8f7e-031b-4ea0-8bcf-97881a5b90e6
[2024-10-09 03:10:11.098554] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 2baf8f7e-031b-4ea0-8bcf-97881a5b90e6 is claimed
[2024-10-09 03:10:11.098683] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 4d5cd7a7-b9a4-4197-960b-36d5d340c827
[2024-10-09 03:10:11.098702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 4d5cd7a7-b9a4-4197-960b-36d5d340c827 is claimed
[2024-10-09 03:10:11.098859] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 4d5cd7a7-b9a4-4197-960b-36d5d340c827 (2) smaller than existing raid bdev Raid (3)
[2024-10-09 03:10:11.098882] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 2baf8f7e-031b-4ea0-8bcf-97881a5b90e6: File exists
[2024-10-09 03:10:11.098916] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00
[2024-10-09 03:10:11.098928] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512
[2024-10-09 03:10:11.099184] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080
[2024-10-09 03:10:11.099326] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00
[2024-10-09 03:10:11.099341] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00
[2024-10-09 03:10:11.099495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks'
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:27.851 [2024-10-09 03:10:11.118747] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 ))
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60193
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60193 ']'
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60193
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # uname
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:27.851 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60193
00:07:28.111 killing process with pid 60193
00:07:28.111 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:07:28.111 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:07:28.111 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60193'
00:07:28.111 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60193
00:07:28.111 [2024-10-09 03:10:11.163279] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:07:28.111 [2024-10-09 03:10:11.163331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:07:28.111 [2024-10-09 03:10:11.163366] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:07:28.111 [2024-10-09 03:10:11.163374] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline
00:07:28.111 03:10:11 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60193
00:07:29.491 [2024-10-09 03:10:12.732337] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:07:30.870 03:10:14 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0
00:07:30.870
00:07:30.870 real	0m5.146s
00:07:30.870 user	0m5.198s
00:07:30.870 sys	0m0.727s
00:07:30.870 03:10:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:30.870 03:10:14 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:30.870 ************************************
00:07:30.870 END TEST raid0_resize_superblock_test
00:07:30.870 ************************************
00:07:30.870 03:10:14 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1
00:07:30.870 03:10:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:30.870 03:10:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:30.870 03:10:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:07:31.129 ************************************
00:07:31.129 START TEST raid1_resize_superblock_test
00:07:31.129 ************************************
00:07:31.129 03:10:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1125 -- # raid_resize_superblock_test 1
00:07:31.129 03:10:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1
00:07:31.129 03:10:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60298
00:07:31.129 03:10:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
Process raid pid: 60298
00:07:31.129 03:10:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60298'
00:07:31.129 03:10:14 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60298
00:07:31.129 03:10:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 60298 ']'
00:07:31.129 03:10:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:31.129 03:10:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:31.129 03:10:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:31.129 03:10:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:31.129 03:10:14 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:31.129 [2024-10-09 03:10:14.259334] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization...
00:07:31.129 [2024-10-09 03:10:14.259559] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:31.129 [2024-10-09 03:10:14.425531] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:31.388 [2024-10-09 03:10:14.678928] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:07:31.647 [2024-10-09 03:10:14.930871] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:31.647 [2024-10-09 03:10:14.931020] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:07:31.906 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:31.906 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:07:31.906 03:10:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512
00:07:31.906 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:31.906 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.476 malloc0
00:07:32.476 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.476 03:10:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0
00:07:32.476 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.476 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.476 [2024-10-09 03:10:15.723891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0
00:07:32.476 [2024-10-09 03:10:15.723982] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:07:32.476 [2024-10-09 03:10:15.724010] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:07:32.476 [2024-10-09 03:10:15.724023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:07:32.476 [2024-10-09 03:10:15.726470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:07:32.476 [2024-10-09 03:10:15.726514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0
pt0
00:07:32.476 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.476 03:10:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0
00:07:32.476 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.476 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.736 b40cb65d-6eb0-4a8a-b531-95b1e92750d1
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.736 8b1403a6-81aa-4c2a-97e6-b343ba27b330
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.736 d960352e-310b-4e56-9862-0730ef341b4f
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.736 [2024-10-09 03:10:15.931290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev 8b1403a6-81aa-4c2a-97e6-b343ba27b330 is claimed
00:07:32.736 [2024-10-09 03:10:15.931407] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev d960352e-310b-4e56-9862-0730ef341b4f is claimed
00:07:32.736 [2024-10-09 03:10:15.931546] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:07:32.736 [2024-10-09 03:10:15.931564] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512
00:07:32.736 [2024-10-09 03:10:15.931822] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:07:32.736 [2024-10-09 03:10:15.932055] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:07:32.736 [2024-10-09 03:10:15.932069] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:07:32.736 [2024-10-09 03:10:15.932245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks'
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 ))
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks'
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:32.736 03:10:15 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:07:32.736 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:32.736 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 ))
00:07:32.736 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in
00:07:32.736 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid
00:07:32.736 03:10:16 bdev_raid.raid1_resize_superblock_test
-- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:32.736 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:07:32.736 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.736 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.996 [2024-10-09 03:10:16.039263] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.996 [2024-10-09 03:10:16.087138] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:32.996 [2024-10-09 03:10:16.087209] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '8b1403a6-81aa-4c2a-97e6-b343ba27b330' was resized: old size 131072, new size 204800 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.996 03:10:16 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.996 [2024-10-09 03:10:16.099093] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:32.996 [2024-10-09 03:10:16.099116] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'd960352e-310b-4e56-9862-0730ef341b4f' was resized: old size 131072, new size 204800 00:07:32.996 [2024-10-09 03:10:16.099144] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.996 [2024-10-09 03:10:16.215077] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.996 [2024-10-09 03:10:16.258801] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:07:32.996 [2024-10-09 03:10:16.258920] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:07:32.996 [2024-10-09 03:10:16.258962] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:07:32.996 
[2024-10-09 03:10:16.259095] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:32.996 [2024-10-09 03:10:16.259252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:32.996 [2024-10-09 03:10:16.259313] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:32.996 [2024-10-09 03:10:16.259330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.996 [2024-10-09 03:10:16.270753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:07:32.996 [2024-10-09 03:10:16.270867] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.996 [2024-10-09 03:10:16.270893] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:07:32.996 [2024-10-09 03:10:16.270905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.996 [2024-10-09 03:10:16.273345] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.996 [2024-10-09 03:10:16.273382] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:07:32.996 pt0 00:07:32.996 [2024-10-09 03:10:16.274985] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 8b1403a6-81aa-4c2a-97e6-b343ba27b330 00:07:32.996 [2024-10-09 03:10:16.275060] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev 8b1403a6-81aa-4c2a-97e6-b343ba27b330 is claimed 00:07:32.996 [2024-10-09 03:10:16.275177] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev d960352e-310b-4e56-9862-0730ef341b4f 00:07:32.996 [2024-10-09 03:10:16.275196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev d960352e-310b-4e56-9862-0730ef341b4f is claimed 00:07:32.996 [2024-10-09 03:10:16.275341] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev d960352e-310b-4e56-9862-0730ef341b4f (2) smaller than existing raid bdev Raid (3) 00:07:32.996 [2024-10-09 03:10:16.275365] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 8b1403a6-81aa-4c2a-97e6-b343ba27b330: File exists 00:07:32.996 [2024-10-09 03:10:16.275399] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:07:32.996 [2024-10-09 03:10:16.275412] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:07:32.996 [2024-10-09 03:10:16.275664] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:07:32.996 [2024-10-09 03:10:16.275820] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:07:32.996 [2024-10-09 03:10:16.275829] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:07:32.996 [2024-10-09 03:10:16.275985] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.996 03:10:16 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:32.996 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:07:32.996 [2024-10-09 03:10:16.295259] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60298 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 60298 ']' 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # kill -0 60298 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # uname 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 60298 00:07:33.256 killing process with pid 60298 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60298' 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@969 -- # kill 60298 00:07:33.256 [2024-10-09 03:10:16.378726] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:33.256 [2024-10-09 03:10:16.378779] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:33.256 [2024-10-09 03:10:16.378831] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:33.256 [2024-10-09 03:10:16.378855] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:07:33.256 03:10:16 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@974 -- # wait 60298 00:07:34.637 [2024-10-09 03:10:17.927907] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:36.019 ************************************ 00:07:36.019 END TEST raid1_resize_superblock_test 00:07:36.019 ************************************ 00:07:36.019 03:10:19 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:07:36.019 00:07:36.019 real 0m5.104s 00:07:36.019 user 0m5.119s 00:07:36.019 sys 0m0.762s 00:07:36.019 03:10:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:36.019 03:10:19 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:36.278 03:10:19 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:07:36.278 03:10:19 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' 
Linux = Linux ']' 00:07:36.278 03:10:19 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:07:36.278 03:10:19 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:07:36.278 03:10:19 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:07:36.278 03:10:19 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:07:36.278 03:10:19 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:36.278 03:10:19 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:36.278 03:10:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:36.278 ************************************ 00:07:36.278 START TEST raid_function_test_raid0 00:07:36.278 ************************************ 00:07:36.278 03:10:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1125 -- # raid_function_test raid0 00:07:36.278 03:10:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:07:36.278 03:10:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:36.278 03:10:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:36.278 Process raid pid: 60405 00:07:36.278 03:10:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60405 00:07:36.278 03:10:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:36.278 03:10:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60405' 00:07:36.278 03:10:19 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60405 00:07:36.278 03:10:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@831 -- # '[' -z 60405 ']' 00:07:36.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:36.278 03:10:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.278 03:10:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:36.278 03:10:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.278 03:10:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:36.278 03:10:19 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:36.278 [2024-10-09 03:10:19.461814] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:07:36.278 [2024-10-09 03:10:19.461938] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.538 [2024-10-09 03:10:19.624452] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.798 [2024-10-09 03:10:19.870107] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.057 [2024-10-09 03:10:20.104330] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.057 [2024-10-09 03:10:20.104463] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:37.057 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:37.057 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # return 0 00:07:37.057 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:37.057 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.057 03:10:20 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:37.057 Base_1 00:07:37.057 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.057 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:37.057 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.057 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:37.316 Base_2 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:37.316 [2024-10-09 03:10:20.398914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:37.316 [2024-10-09 03:10:20.400981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:37.316 [2024-10-09 03:10:20.401147] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:37.316 [2024-10-09 03:10:20.401165] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:37.316 [2024-10-09 03:10:20.401420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:37.316 [2024-10-09 03:10:20.401559] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:37.316 [2024-10-09 03:10:20.401567] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:37.316 [2024-10-09 03:10:20.401710] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:37.316 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:37.577 [2024-10-09 03:10:20.638445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:37.577 /dev/nbd0 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@869 -- # local i 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # break 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:37.577 1+0 records in 00:07:37.577 1+0 records out 00:07:37.577 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391236 s, 10.5 MB/s 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@886 -- # size=4096 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # return 0 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:37.577 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:37.837 { 00:07:37.837 "nbd_device": "/dev/nbd0", 00:07:37.837 "bdev_name": "raid" 00:07:37.837 } 00:07:37.837 ]' 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:37.837 { 00:07:37.837 "nbd_device": "/dev/nbd0", 00:07:37.837 "bdev_name": "raid" 00:07:37.837 } 00:07:37.837 ]' 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:07:37.837 03:10:20 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:37.837 03:10:20 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom 
of=/raidtest/raidrandtest bs=512 count=4096 00:07:37.837 4096+0 records in 00:07:37.837 4096+0 records out 00:07:37.837 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.034909 s, 60.1 MB/s 00:07:37.837 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:38.096 4096+0 records in 00:07:38.096 4096+0 records out 00:07:38.096 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.214815 s, 9.8 MB/s 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:38.096 128+0 records in 00:07:38.096 128+0 records out 00:07:38.096 65536 bytes (66 kB, 64 KiB) copied, 0.000352294 s, 186 MB/s 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 
00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:38.096 2035+0 records in 00:07:38.096 2035+0 records out 00:07:38.096 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0106453 s, 97.9 MB/s 00:07:38.096 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:38.097 456+0 records in 00:07:38.097 456+0 records out 00:07:38.097 233472 bytes (233 kB, 228 KiB) copied, 0.00235335 s, 99.2 MB/s 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:38.097 
03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:38.097 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:38.357 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:38.357 [2024-10-09 03:10:21.543411] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:38.357 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:38.357 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:38.357 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:38.357 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:38.357 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:38.357 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:07:38.357 03:10:21 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:07:38.357 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:38.357 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:38.357 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60405 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@950 -- # '[' -z 60405 ']' 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # kill -0 60405 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # 
uname 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60405 00:07:38.616 killing process with pid 60405 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60405' 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@969 -- # kill 60405 00:07:38.616 [2024-10-09 03:10:21.831240] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:38.616 03:10:21 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@974 -- # wait 60405 00:07:38.616 [2024-10-09 03:10:21.831381] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:38.616 [2024-10-09 03:10:21.831437] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:38.616 [2024-10-09 03:10:21.831452] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:38.876 [2024-10-09 03:10:22.051783] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:40.257 ************************************ 00:07:40.257 END TEST raid_function_test_raid0 00:07:40.257 ************************************ 00:07:40.257 03:10:23 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:07:40.257 00:07:40.257 real 0m4.034s 00:07:40.257 user 0m4.469s 00:07:40.257 sys 0m1.037s 00:07:40.257 03:10:23 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:40.257 03:10:23 
bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:07:40.257 03:10:23 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:07:40.257 03:10:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:40.257 03:10:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:40.257 03:10:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:40.257 ************************************ 00:07:40.257 START TEST raid_function_test_concat 00:07:40.257 ************************************ 00:07:40.257 03:10:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1125 -- # raid_function_test concat 00:07:40.257 03:10:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:07:40.257 03:10:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:07:40.257 03:10:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:07:40.257 03:10:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60530 00:07:40.257 03:10:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:40.257 03:10:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60530' 00:07:40.257 Process raid pid: 60530 00:07:40.257 03:10:23 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60530 00:07:40.257 03:10:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@831 -- # '[' -z 60530 ']' 00:07:40.257 03:10:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.257 03:10:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:40.257 Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock... 00:07:40.257 03:10:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.257 03:10:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:40.257 03:10:23 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:40.257 [2024-10-09 03:10:23.557813] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:07:40.257 [2024-10-09 03:10:23.557950] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.517 [2024-10-09 03:10:23.723499] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.777 [2024-10-09 03:10:23.983790] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.037 [2024-10-09 03:10:24.224141] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.037 [2024-10-09 03:10:24.224199] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:41.296 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:41.296 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # return 0 00:07:41.296 03:10:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:07:41.296 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.296 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:41.296 Base_1 00:07:41.296 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.296 
03:10:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:07:41.296 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.296 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:41.296 Base_2 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:41.297 [2024-10-09 03:10:24.498743] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:41.297 [2024-10-09 03:10:24.500794] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:41.297 [2024-10-09 03:10:24.500888] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:41.297 [2024-10-09 03:10:24.500902] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:41.297 [2024-10-09 03:10:24.501199] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:41.297 [2024-10-09 03:10:24.501369] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:41.297 [2024-10-09 03:10:24.501385] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:07:41.297 [2024-10-09 03:10:24.501567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.297 03:10:24 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:41.297 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:07:41.557 [2024-10-09 03:10:24.742399] bdev_raid.c: 265:raid_bdev_create_cb: 
*DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:07:41.557 /dev/nbd0 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@869 -- # local i 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # break 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:07:41.557 1+0 records in 00:07:41.557 1+0 records out 00:07:41.557 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000418153 s, 9.8 MB/s 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@886 -- # size=4096 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # '[' 
4096 '!=' 0 ']' 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # return 0 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:41.557 03:10:24 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:41.817 { 00:07:41.817 "nbd_device": "/dev/nbd0", 00:07:41.817 "bdev_name": "raid" 00:07:41.817 } 00:07:41.817 ]' 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:41.817 { 00:07:41.817 "nbd_device": "/dev/nbd0", 00:07:41.817 "bdev_name": "raid" 00:07:41.817 } 00:07:41.817 ]' 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 
00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:07:41.817 4096+0 records in 00:07:41.817 4096+0 records out 00:07:41.817 2097152 bytes (2.1 MB, 2.0 MiB) copied, 
0.033832 s, 62.0 MB/s 00:07:41.817 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:07:42.077 4096+0 records in 00:07:42.077 4096+0 records out 00:07:42.077 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.210527 s, 10.0 MB/s 00:07:42.077 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:07:42.077 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:42.077 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:07:42.077 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:42.077 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:07:42.077 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:07:42.077 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:07:42.077 128+0 records in 00:07:42.077 128+0 records out 00:07:42.077 65536 bytes (66 kB, 64 KiB) copied, 0.0011351 s, 57.7 MB/s 00:07:42.077 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:07:42.077 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:42.077 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:42.077 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:42.077 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:42.077 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:07:42.077 03:10:25 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:07:42.077 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:07:42.338 2035+0 records in 00:07:42.338 2035+0 records out 00:07:42.338 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0131371 s, 79.3 MB/s 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:07:42.338 456+0 records in 00:07:42.338 456+0 records out 00:07:42.338 233472 bytes (233 kB, 228 KiB) copied, 0.00225478 s, 104 MB/s 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:07:42.338 03:10:25 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:42.338 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:07:42.598 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:42.598 [2024-10-09 03:10:25.657920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.598 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:42.598 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:42.598 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:42.598 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:42.598 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:42.598 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break 00:07:42.598 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0 00:07:42.598 
03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:07:42.598 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:07:42.598 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:07:42.598 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:42.598 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:42.598 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60530 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@950 -- # '[' -z 60530 ']' 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # kill -0 60530 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@955 -- # uname 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60530 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60530' 00:07:42.858 killing process with pid 60530 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@969 -- # kill 60530 00:07:42.858 [2024-10-09 03:10:25.971102] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:42.858 03:10:25 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@974 -- # wait 60530 00:07:42.858 [2024-10-09 03:10:25.971249] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:42.858 [2024-10-09 03:10:25.971308] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:42.858 [2024-10-09 03:10:25.971327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:07:43.118 [2024-10-09 03:10:26.195144] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.500 03:10:27 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0 00:07:44.500 00:07:44.500 real 0m4.089s 00:07:44.500 user 0m4.541s 00:07:44.500 sys 0m1.085s 00:07:44.500 03:10:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:44.500 03:10:27 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:07:44.500 ************************************ 00:07:44.500 END TEST raid_function_test_concat 00:07:44.500 ************************************ 
00:07:44.500 03:10:27 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0 00:07:44.500 03:10:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:44.500 03:10:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:44.500 03:10:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:44.500 ************************************ 00:07:44.500 START TEST raid0_resize_test 00:07:44.500 ************************************ 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 0 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60659 00:07:44.500 Process raid pid: 60659 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60659' 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60659 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@831 -- # '[' -z 
60659 ']' 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:44.500 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:44.500 03:10:27 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.500 [2024-10-09 03:10:27.725812] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:07:44.500 [2024-10-09 03:10:27.725964] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:44.762 [2024-10-09 03:10:27.895290] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.019 [2024-10-09 03:10:28.159649] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.278 [2024-10-09 03:10:28.402129] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.278 [2024-10-09 03:10:28.402177] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.278 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:45.278 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:45.278 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:45.278 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:45.278 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.278 Base_1 00:07:45.278 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.278 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:45.278 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.278 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.537 Base_2 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']' 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.537 [2024-10-09 03:10:28.586926] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:45.537 [2024-10-09 03:10:28.588932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:45.537 [2024-10-09 03:10:28.588990] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:45.537 [2024-10-09 03:10:28.589002] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:45.537 [2024-10-09 03:10:28.589232] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:45.537 [2024-10-09 03:10:28.589362] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:45.537 [2024-10-09 03:10:28.589377] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 
00:07:45.537 [2024-10-09 03:10:28.589513] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.537 [2024-10-09 03:10:28.594854] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:45.537 [2024-10-09 03:10:28.594882] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:45.537 true 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.537 [2024-10-09 03:10:28.610982] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']' 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']' 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.537 [2024-10-09 03:10:28.654804] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:45.537 [2024-10-09 03:10:28.654851] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:45.537 [2024-10-09 03:10:28.654885] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:45.537 true 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:45.537 [2024-10-09 03:10:28.666925] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']' 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- 
bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']' 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60659 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60659 ']' 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # kill -0 60659 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60659 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:45.537 killing process with pid 60659 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60659' 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@969 -- # kill 60659 00:07:45.537 [2024-10-09 03:10:28.753924] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:45.537 [2024-10-09 03:10:28.754033] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:45.537 [2024-10-09 03:10:28.754088] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:45.537 03:10:28 bdev_raid.raid0_resize_test -- common/autotest_common.sh@974 -- # wait 60659 00:07:45.537 [2024-10-09 03:10:28.754098] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:45.537 [2024-10-09 03:10:28.772247] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:46.915 03:10:30 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:46.915 
00:07:46.915 real 0m2.514s 00:07:46.915 user 0m2.518s 00:07:46.915 sys 0m0.464s 00:07:46.915 03:10:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:46.915 03:10:30 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:46.915 ************************************ 00:07:46.915 END TEST raid0_resize_test 00:07:46.915 ************************************ 00:07:46.915 03:10:30 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1 00:07:46.915 03:10:30 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:46.915 03:10:30 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:46.915 03:10:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:46.915 ************************************ 00:07:46.915 START TEST raid1_resize_test 00:07:46.915 ************************************ 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1125 -- # raid_resize_test 1 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60720 00:07:46.915 Process raid pid: 60720 00:07:46.915 03:10:30 
bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60720' 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60720 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@831 -- # '[' -z 60720 ']' 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:46.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:46.915 03:10:30 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.174 [2024-10-09 03:10:30.301878] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:47.174 [2024-10-09 03:10:30.302009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.174 [2024-10-09 03:10:30.472944] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:47.433 [2024-10-09 03:10:30.729151] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.691 [2024-10-09 03:10:30.974634] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.691 [2024-10-09 03:10:30.974681] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # return 0 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.950 Base_1 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.950 Base_2 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']' 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd 
bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.950 [2024-10-09 03:10:31.175345] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:47.950 [2024-10-09 03:10:31.177418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:47.950 [2024-10-09 03:10:31.177481] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:07:47.950 [2024-10-09 03:10:31.177493] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:07:47.950 [2024-10-09 03:10:31.177736] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:07:47.950 [2024-10-09 03:10:31.177889] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:07:47.950 [2024-10-09 03:10:31.177901] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:07:47.950 [2024-10-09 03:10:31.178056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.950 [2024-10-09 03:10:31.187271] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:47.950 [2024-10-09 03:10:31.187301] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:47.950 true 00:07:47.950 
03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:47.950 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks' 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.951 [2024-10-09 03:10:31.203370] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']' 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']' 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.951 [2024-10-09 03:10:31.243196] bdev_raid.c:2313:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:47.951 [2024-10-09 03:10:31.243227] bdev_raid.c:2326:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:47.951 [2024-10-09 03:10:31.243259] bdev_raid.c:2340:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072 00:07:47.951 true 00:07:47.951 03:10:31 
bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:47.951 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks' 00:07:48.209 [2024-10-09 03:10:31.255298] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']' 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']' 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60720 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@950 -- # '[' -z 60720 ']' 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # kill -0 60720 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # uname 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60720 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:48.209 killing process with pid 60720 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60720' 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@969 -- # kill 60720 00:07:48.209 [2024-10-09 03:10:31.341143] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:48.209 [2024-10-09 03:10:31.341245] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:48.209 03:10:31 bdev_raid.raid1_resize_test -- common/autotest_common.sh@974 -- # wait 60720 00:07:48.209 [2024-10-09 03:10:31.341770] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:48.209 [2024-10-09 03:10:31.341791] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:07:48.209 [2024-10-09 03:10:31.359167] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:49.587 03:10:32 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0 00:07:49.587 00:07:49.587 real 0m2.516s 00:07:49.587 user 0m2.531s 00:07:49.587 sys 0m0.442s 00:07:49.587 03:10:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.587 03:10:32 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.587 ************************************ 00:07:49.587 END TEST raid1_resize_test 00:07:49.587 ************************************ 00:07:49.587 03:10:32 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:07:49.587 03:10:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:07:49.587 03:10:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:49.587 03:10:32 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:49.587 03:10:32 bdev_raid 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.587 03:10:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:49.587 ************************************ 00:07:49.587 START TEST raid_state_function_test 00:07:49.587 ************************************ 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 false 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # 
local raid_bdev_name=Existed_Raid 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:07:49.587 Process raid pid: 60783 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60783 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60783' 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60783 00:07:49.587 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 60783 ']' 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:49.587 03:10:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:49.846 [2024-10-09 03:10:32.892418] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:07:49.846 [2024-10-09 03:10:32.892637] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:49.846 [2024-10-09 03:10:33.058042] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.104 [2024-10-09 03:10:33.320602] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.405 [2024-10-09 03:10:33.560146] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.405 [2024-10-09 03:10:33.560288] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 
00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.664 [2024-10-09 03:10:33.721621] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.664 [2024-10-09 03:10:33.721770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.664 [2024-10-09 03:10:33.721805] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.664 [2024-10-09 03:10:33.721828] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:50.664 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:50.665 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local 
tmp 00:07:50.665 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:50.665 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:50.665 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.665 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.665 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.665 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:50.665 "name": "Existed_Raid", 00:07:50.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.665 "strip_size_kb": 64, 00:07:50.665 "state": "configuring", 00:07:50.665 "raid_level": "raid0", 00:07:50.665 "superblock": false, 00:07:50.665 "num_base_bdevs": 2, 00:07:50.665 "num_base_bdevs_discovered": 0, 00:07:50.665 "num_base_bdevs_operational": 2, 00:07:50.665 "base_bdevs_list": [ 00:07:50.665 { 00:07:50.665 "name": "BaseBdev1", 00:07:50.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.665 "is_configured": false, 00:07:50.665 "data_offset": 0, 00:07:50.665 "data_size": 0 00:07:50.665 }, 00:07:50.665 { 00:07:50.665 "name": "BaseBdev2", 00:07:50.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:50.665 "is_configured": false, 00:07:50.665 "data_offset": 0, 00:07:50.665 "data_size": 0 00:07:50.665 } 00:07:50.665 ] 00:07:50.665 }' 00:07:50.665 03:10:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:50.665 03:10:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.924 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:50.924 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:07:50.924 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.924 [2024-10-09 03:10:34.160833] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:50.924 [2024-10-09 03:10:34.161006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:50.924 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.924 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:50.924 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.924 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.924 [2024-10-09 03:10:34.168790] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:50.924 [2024-10-09 03:10:34.168852] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:50.924 [2024-10-09 03:10:34.168863] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:50.924 [2024-10-09 03:10:34.168876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:50.924 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.924 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:50.924 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.924 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.183 [2024-10-09 03:10:34.230428] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.183 BaseBdev1 
00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.183 [ 00:07:51.183 { 00:07:51.183 "name": "BaseBdev1", 00:07:51.183 "aliases": [ 00:07:51.183 "c4875278-e096-429c-8773-990a44c4b934" 00:07:51.183 ], 00:07:51.183 "product_name": "Malloc disk", 00:07:51.183 "block_size": 512, 00:07:51.183 "num_blocks": 65536, 00:07:51.183 "uuid": "c4875278-e096-429c-8773-990a44c4b934", 00:07:51.183 "assigned_rate_limits": { 00:07:51.183 "rw_ios_per_sec": 0, 00:07:51.183 
"rw_mbytes_per_sec": 0, 00:07:51.183 "r_mbytes_per_sec": 0, 00:07:51.183 "w_mbytes_per_sec": 0 00:07:51.183 }, 00:07:51.183 "claimed": true, 00:07:51.183 "claim_type": "exclusive_write", 00:07:51.183 "zoned": false, 00:07:51.183 "supported_io_types": { 00:07:51.183 "read": true, 00:07:51.183 "write": true, 00:07:51.183 "unmap": true, 00:07:51.183 "flush": true, 00:07:51.183 "reset": true, 00:07:51.183 "nvme_admin": false, 00:07:51.183 "nvme_io": false, 00:07:51.183 "nvme_io_md": false, 00:07:51.183 "write_zeroes": true, 00:07:51.183 "zcopy": true, 00:07:51.183 "get_zone_info": false, 00:07:51.183 "zone_management": false, 00:07:51.183 "zone_append": false, 00:07:51.183 "compare": false, 00:07:51.183 "compare_and_write": false, 00:07:51.183 "abort": true, 00:07:51.183 "seek_hole": false, 00:07:51.183 "seek_data": false, 00:07:51.183 "copy": true, 00:07:51.183 "nvme_iov_md": false 00:07:51.183 }, 00:07:51.183 "memory_domains": [ 00:07:51.183 { 00:07:51.183 "dma_device_id": "system", 00:07:51.183 "dma_device_type": 1 00:07:51.183 }, 00:07:51.183 { 00:07:51.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.183 "dma_device_type": 2 00:07:51.183 } 00:07:51.183 ], 00:07:51.183 "driver_specific": {} 00:07:51.183 } 00:07:51.183 ] 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.183 03:10:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.183 "name": "Existed_Raid", 00:07:51.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.183 "strip_size_kb": 64, 00:07:51.183 "state": "configuring", 00:07:51.183 "raid_level": "raid0", 00:07:51.183 "superblock": false, 00:07:51.183 "num_base_bdevs": 2, 00:07:51.183 "num_base_bdevs_discovered": 1, 00:07:51.183 "num_base_bdevs_operational": 2, 00:07:51.183 "base_bdevs_list": [ 00:07:51.183 { 00:07:51.183 "name": "BaseBdev1", 00:07:51.183 "uuid": "c4875278-e096-429c-8773-990a44c4b934", 00:07:51.183 "is_configured": true, 00:07:51.183 "data_offset": 0, 00:07:51.183 "data_size": 65536 00:07:51.183 }, 00:07:51.183 { 00:07:51.183 "name": "BaseBdev2", 00:07:51.183 
"uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.183 "is_configured": false, 00:07:51.183 "data_offset": 0, 00:07:51.183 "data_size": 0 00:07:51.183 } 00:07:51.183 ] 00:07:51.183 }' 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.183 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.442 [2024-10-09 03:10:34.709806] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:51.442 [2024-10-09 03:10:34.709977] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.442 [2024-10-09 03:10:34.721792] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:51.442 [2024-10-09 03:10:34.723932] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.442 [2024-10-09 03:10:34.724010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.442 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.701 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.701 03:10:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:51.701 "name": "Existed_Raid", 00:07:51.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.701 "strip_size_kb": 64, 00:07:51.701 "state": "configuring", 00:07:51.701 "raid_level": "raid0", 00:07:51.701 "superblock": false, 00:07:51.701 "num_base_bdevs": 2, 00:07:51.701 "num_base_bdevs_discovered": 1, 00:07:51.701 "num_base_bdevs_operational": 2, 00:07:51.701 "base_bdevs_list": [ 00:07:51.701 { 00:07:51.701 "name": "BaseBdev1", 00:07:51.701 "uuid": "c4875278-e096-429c-8773-990a44c4b934", 00:07:51.701 "is_configured": true, 00:07:51.701 "data_offset": 0, 00:07:51.701 "data_size": 65536 00:07:51.701 }, 00:07:51.701 { 00:07:51.701 "name": "BaseBdev2", 00:07:51.701 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.701 "is_configured": false, 00:07:51.701 "data_offset": 0, 00:07:51.701 "data_size": 0 00:07:51.701 } 00:07:51.701 ] 00:07:51.701 }' 00:07:51.701 03:10:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:51.701 03:10:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.961 [2024-10-09 03:10:35.216534] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:51.961 [2024-10-09 03:10:35.216694] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:51.961 [2024-10-09 03:10:35.216725] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:51.961 [2024-10-09 03:10:35.217091] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:51.961 [2024-10-09 
03:10:35.217315] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:51.961 [2024-10-09 03:10:35.217368] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:51.961 [2024-10-09 03:10:35.217699] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:51.961 BaseBdev2 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:51.961 [ 00:07:51.961 { 
00:07:51.961 "name": "BaseBdev2", 00:07:51.961 "aliases": [ 00:07:51.961 "6e71f464-756e-48d3-a4d2-5a4a207d65e5" 00:07:51.961 ], 00:07:51.961 "product_name": "Malloc disk", 00:07:51.961 "block_size": 512, 00:07:51.961 "num_blocks": 65536, 00:07:51.961 "uuid": "6e71f464-756e-48d3-a4d2-5a4a207d65e5", 00:07:51.961 "assigned_rate_limits": { 00:07:51.961 "rw_ios_per_sec": 0, 00:07:51.961 "rw_mbytes_per_sec": 0, 00:07:51.961 "r_mbytes_per_sec": 0, 00:07:51.961 "w_mbytes_per_sec": 0 00:07:51.961 }, 00:07:51.961 "claimed": true, 00:07:51.961 "claim_type": "exclusive_write", 00:07:51.961 "zoned": false, 00:07:51.961 "supported_io_types": { 00:07:51.961 "read": true, 00:07:51.961 "write": true, 00:07:51.961 "unmap": true, 00:07:51.961 "flush": true, 00:07:51.961 "reset": true, 00:07:51.961 "nvme_admin": false, 00:07:51.961 "nvme_io": false, 00:07:51.961 "nvme_io_md": false, 00:07:51.961 "write_zeroes": true, 00:07:51.961 "zcopy": true, 00:07:51.961 "get_zone_info": false, 00:07:51.961 "zone_management": false, 00:07:51.961 "zone_append": false, 00:07:51.961 "compare": false, 00:07:51.961 "compare_and_write": false, 00:07:51.961 "abort": true, 00:07:51.961 "seek_hole": false, 00:07:51.961 "seek_data": false, 00:07:51.961 "copy": true, 00:07:51.961 "nvme_iov_md": false 00:07:51.961 }, 00:07:51.961 "memory_domains": [ 00:07:51.961 { 00:07:51.961 "dma_device_id": "system", 00:07:51.961 "dma_device_type": 1 00:07:51.961 }, 00:07:51.961 { 00:07:51.961 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:51.961 "dma_device_type": 2 00:07:51.961 } 00:07:51.961 ], 00:07:51.961 "driver_specific": {} 00:07:51.961 } 00:07:51.961 ] 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:51.961 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.220 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.220 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.220 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.220 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.220 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.220 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.220 "name": "Existed_Raid", 00:07:52.220 "uuid": "d6b2920f-a357-4507-ba9c-2eef62d7a5e7", 00:07:52.220 
"strip_size_kb": 64, 00:07:52.220 "state": "online", 00:07:52.220 "raid_level": "raid0", 00:07:52.220 "superblock": false, 00:07:52.220 "num_base_bdevs": 2, 00:07:52.220 "num_base_bdevs_discovered": 2, 00:07:52.220 "num_base_bdevs_operational": 2, 00:07:52.220 "base_bdevs_list": [ 00:07:52.220 { 00:07:52.220 "name": "BaseBdev1", 00:07:52.220 "uuid": "c4875278-e096-429c-8773-990a44c4b934", 00:07:52.220 "is_configured": true, 00:07:52.220 "data_offset": 0, 00:07:52.220 "data_size": 65536 00:07:52.220 }, 00:07:52.220 { 00:07:52.220 "name": "BaseBdev2", 00:07:52.220 "uuid": "6e71f464-756e-48d3-a4d2-5a4a207d65e5", 00:07:52.220 "is_configured": true, 00:07:52.220 "data_offset": 0, 00:07:52.220 "data_size": 65536 00:07:52.220 } 00:07:52.220 ] 00:07:52.220 }' 00:07:52.220 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.220 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.478 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:52.478 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:52.478 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:52.478 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:52.478 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:07:52.478 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:52.478 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:52.478 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:52.478 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.478 
03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.479 [2024-10-09 03:10:35.704081] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:52.479 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.479 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:52.479 "name": "Existed_Raid", 00:07:52.479 "aliases": [ 00:07:52.479 "d6b2920f-a357-4507-ba9c-2eef62d7a5e7" 00:07:52.479 ], 00:07:52.479 "product_name": "Raid Volume", 00:07:52.479 "block_size": 512, 00:07:52.479 "num_blocks": 131072, 00:07:52.479 "uuid": "d6b2920f-a357-4507-ba9c-2eef62d7a5e7", 00:07:52.479 "assigned_rate_limits": { 00:07:52.479 "rw_ios_per_sec": 0, 00:07:52.479 "rw_mbytes_per_sec": 0, 00:07:52.479 "r_mbytes_per_sec": 0, 00:07:52.479 "w_mbytes_per_sec": 0 00:07:52.479 }, 00:07:52.479 "claimed": false, 00:07:52.479 "zoned": false, 00:07:52.479 "supported_io_types": { 00:07:52.479 "read": true, 00:07:52.479 "write": true, 00:07:52.479 "unmap": true, 00:07:52.479 "flush": true, 00:07:52.479 "reset": true, 00:07:52.479 "nvme_admin": false, 00:07:52.479 "nvme_io": false, 00:07:52.479 "nvme_io_md": false, 00:07:52.479 "write_zeroes": true, 00:07:52.479 "zcopy": false, 00:07:52.479 "get_zone_info": false, 00:07:52.479 "zone_management": false, 00:07:52.479 "zone_append": false, 00:07:52.479 "compare": false, 00:07:52.479 "compare_and_write": false, 00:07:52.479 "abort": false, 00:07:52.479 "seek_hole": false, 00:07:52.479 "seek_data": false, 00:07:52.479 "copy": false, 00:07:52.479 "nvme_iov_md": false 00:07:52.479 }, 00:07:52.479 "memory_domains": [ 00:07:52.479 { 00:07:52.479 "dma_device_id": "system", 00:07:52.479 "dma_device_type": 1 00:07:52.479 }, 00:07:52.479 { 00:07:52.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.479 "dma_device_type": 2 00:07:52.479 }, 00:07:52.479 { 00:07:52.479 "dma_device_id": "system", 
00:07:52.479 "dma_device_type": 1 00:07:52.479 }, 00:07:52.479 { 00:07:52.479 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:52.479 "dma_device_type": 2 00:07:52.479 } 00:07:52.479 ], 00:07:52.479 "driver_specific": { 00:07:52.479 "raid": { 00:07:52.479 "uuid": "d6b2920f-a357-4507-ba9c-2eef62d7a5e7", 00:07:52.479 "strip_size_kb": 64, 00:07:52.479 "state": "online", 00:07:52.479 "raid_level": "raid0", 00:07:52.479 "superblock": false, 00:07:52.479 "num_base_bdevs": 2, 00:07:52.479 "num_base_bdevs_discovered": 2, 00:07:52.479 "num_base_bdevs_operational": 2, 00:07:52.479 "base_bdevs_list": [ 00:07:52.479 { 00:07:52.479 "name": "BaseBdev1", 00:07:52.479 "uuid": "c4875278-e096-429c-8773-990a44c4b934", 00:07:52.479 "is_configured": true, 00:07:52.479 "data_offset": 0, 00:07:52.479 "data_size": 65536 00:07:52.479 }, 00:07:52.479 { 00:07:52.479 "name": "BaseBdev2", 00:07:52.479 "uuid": "6e71f464-756e-48d3-a4d2-5a4a207d65e5", 00:07:52.479 "is_configured": true, 00:07:52.479 "data_offset": 0, 00:07:52.479 "data_size": 65536 00:07:52.479 } 00:07:52.479 ] 00:07:52.479 } 00:07:52.479 } 00:07:52.479 }' 00:07:52.479 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:52.737 BaseBdev2' 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.737 03:10:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.737 [2024-10-09 03:10:35.947396] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:52.737 [2024-10-09 03:10:35.947452] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:52.737 [2024-10-09 03:10:35.947517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:52.996 "name": "Existed_Raid", 00:07:52.996 "uuid": "d6b2920f-a357-4507-ba9c-2eef62d7a5e7", 00:07:52.996 "strip_size_kb": 64, 00:07:52.996 "state": "offline", 00:07:52.996 "raid_level": "raid0", 00:07:52.996 "superblock": false, 00:07:52.996 "num_base_bdevs": 2, 00:07:52.996 "num_base_bdevs_discovered": 1, 00:07:52.996 "num_base_bdevs_operational": 1, 00:07:52.996 "base_bdevs_list": [ 00:07:52.996 { 00:07:52.996 "name": null, 00:07:52.996 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:52.996 "is_configured": false, 00:07:52.996 "data_offset": 0, 00:07:52.996 "data_size": 65536 00:07:52.996 }, 00:07:52.996 { 00:07:52.996 "name": "BaseBdev2", 00:07:52.996 "uuid": "6e71f464-756e-48d3-a4d2-5a4a207d65e5", 00:07:52.996 "is_configured": true, 00:07:52.996 "data_offset": 0, 00:07:52.996 "data_size": 65536 00:07:52.996 } 00:07:52.996 ] 00:07:52.996 }' 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:52.996 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.255 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:53.255 03:10:36 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.255 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.255 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.255 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.255 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:53.255 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.255 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:53.255 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:53.255 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:53.255 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.255 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.255 [2024-10-09 03:10:36.520107] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:53.255 [2024-10-09 03:10:36.520193] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 
00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60783 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 60783 ']' 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 60783 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 60783 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:53.514 killing process with pid 60783 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 60783' 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 60783 00:07:53.514 [2024-10-09 03:10:36.721046] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:53.514 03:10:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@974 -- # wait 60783 00:07:53.514 [2024-10-09 03:10:36.738185] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:54.890 03:10:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:07:54.890 00:07:54.891 real 0m5.310s 00:07:54.891 user 0m7.414s 00:07:54.891 sys 0m0.917s 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:54.891 ************************************ 00:07:54.891 END TEST raid_state_function_test 00:07:54.891 ************************************ 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 03:10:38 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:54.891 03:10:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:07:54.891 03:10:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:54.891 03:10:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:54.891 ************************************ 00:07:54.891 START TEST raid_state_function_test_sb 00:07:54.891 ************************************ 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 2 true 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 
-- # (( i <= num_base_bdevs )) 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@229 -- # raid_pid=61036 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61036' 00:07:54.891 Process raid pid: 61036 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61036 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 61036 ']' 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:54.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:54.891 03:10:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.149 [2024-10-09 03:10:38.264336] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:07:55.149 [2024-10-09 03:10:38.264545] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:55.149 [2024-10-09 03:10:38.427682] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:55.408 [2024-10-09 03:10:38.688936] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:07:55.666 [2024-10-09 03:10:38.931118] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.666 [2024-10-09 03:10:38.931246] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.925 [2024-10-09 03:10:39.097445] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:55.925 [2024-10-09 03:10:39.097512] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:55.925 [2024-10-09 03:10:39.097526] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:55.925 [2024-10-09 03:10:39.097538] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.925 
03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:55.925 "name": "Existed_Raid", 00:07:55.925 "uuid": "521ddf4a-0f1f-41a7-9e53-d0cba4986177", 00:07:55.925 "strip_size_kb": 
64, 00:07:55.925 "state": "configuring", 00:07:55.925 "raid_level": "raid0", 00:07:55.925 "superblock": true, 00:07:55.925 "num_base_bdevs": 2, 00:07:55.925 "num_base_bdevs_discovered": 0, 00:07:55.925 "num_base_bdevs_operational": 2, 00:07:55.925 "base_bdevs_list": [ 00:07:55.925 { 00:07:55.925 "name": "BaseBdev1", 00:07:55.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.925 "is_configured": false, 00:07:55.925 "data_offset": 0, 00:07:55.925 "data_size": 0 00:07:55.925 }, 00:07:55.925 { 00:07:55.925 "name": "BaseBdev2", 00:07:55.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:55.925 "is_configured": false, 00:07:55.925 "data_offset": 0, 00:07:55.925 "data_size": 0 00:07:55.925 } 00:07:55.925 ] 00:07:55.925 }' 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:55.925 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.492 [2024-10-09 03:10:39.552622] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:56.492 [2024-10-09 03:10:39.552781] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.492 03:10:39 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.492 [2024-10-09 03:10:39.560584] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:56.492 [2024-10-09 03:10:39.560683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:56.492 [2024-10-09 03:10:39.560710] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:56.492 [2024-10-09 03:10:39.560736] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.492 [2024-10-09 03:10:39.629590] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:56.492 BaseBdev1 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:56.492 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.493 [ 00:07:56.493 { 00:07:56.493 "name": "BaseBdev1", 00:07:56.493 "aliases": [ 00:07:56.493 "14567d3b-bfe5-4dfa-bf10-1ff58a27632d" 00:07:56.493 ], 00:07:56.493 "product_name": "Malloc disk", 00:07:56.493 "block_size": 512, 00:07:56.493 "num_blocks": 65536, 00:07:56.493 "uuid": "14567d3b-bfe5-4dfa-bf10-1ff58a27632d", 00:07:56.493 "assigned_rate_limits": { 00:07:56.493 "rw_ios_per_sec": 0, 00:07:56.493 "rw_mbytes_per_sec": 0, 00:07:56.493 "r_mbytes_per_sec": 0, 00:07:56.493 "w_mbytes_per_sec": 0 00:07:56.493 }, 00:07:56.493 "claimed": true, 00:07:56.493 "claim_type": "exclusive_write", 00:07:56.493 "zoned": false, 00:07:56.493 "supported_io_types": { 00:07:56.493 "read": true, 00:07:56.493 "write": true, 00:07:56.493 "unmap": true, 00:07:56.493 "flush": true, 00:07:56.493 "reset": true, 00:07:56.493 "nvme_admin": false, 00:07:56.493 "nvme_io": false, 00:07:56.493 "nvme_io_md": false, 00:07:56.493 "write_zeroes": true, 00:07:56.493 "zcopy": true, 00:07:56.493 "get_zone_info": false, 00:07:56.493 "zone_management": false, 00:07:56.493 "zone_append": false, 00:07:56.493 "compare": false, 00:07:56.493 "compare_and_write": false, 00:07:56.493 
"abort": true, 00:07:56.493 "seek_hole": false, 00:07:56.493 "seek_data": false, 00:07:56.493 "copy": true, 00:07:56.493 "nvme_iov_md": false 00:07:56.493 }, 00:07:56.493 "memory_domains": [ 00:07:56.493 { 00:07:56.493 "dma_device_id": "system", 00:07:56.493 "dma_device_type": 1 00:07:56.493 }, 00:07:56.493 { 00:07:56.493 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.493 "dma_device_type": 2 00:07:56.493 } 00:07:56.493 ], 00:07:56.493 "driver_specific": {} 00:07:56.493 } 00:07:56.493 ] 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:56.493 "name": "Existed_Raid", 00:07:56.493 "uuid": "eeccba00-8083-4c44-8b05-dde4788ac531", 00:07:56.493 "strip_size_kb": 64, 00:07:56.493 "state": "configuring", 00:07:56.493 "raid_level": "raid0", 00:07:56.493 "superblock": true, 00:07:56.493 "num_base_bdevs": 2, 00:07:56.493 "num_base_bdevs_discovered": 1, 00:07:56.493 "num_base_bdevs_operational": 2, 00:07:56.493 "base_bdevs_list": [ 00:07:56.493 { 00:07:56.493 "name": "BaseBdev1", 00:07:56.493 "uuid": "14567d3b-bfe5-4dfa-bf10-1ff58a27632d", 00:07:56.493 "is_configured": true, 00:07:56.493 "data_offset": 2048, 00:07:56.493 "data_size": 63488 00:07:56.493 }, 00:07:56.493 { 00:07:56.493 "name": "BaseBdev2", 00:07:56.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:56.493 "is_configured": false, 00:07:56.493 "data_offset": 0, 00:07:56.493 "data_size": 0 00:07:56.493 } 00:07:56.493 ] 00:07:56.493 }' 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:56.493 03:10:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:07:57.062 [2024-10-09 03:10:40.088913] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:57.062 [2024-10-09 03:10:40.088999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.062 [2024-10-09 03:10:40.100911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:57.062 [2024-10-09 03:10:40.103126] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:57.062 [2024-10-09 03:10:40.103175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.062 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.062 "name": "Existed_Raid", 00:07:57.062 "uuid": "2cf45f81-d801-47ad-8d78-b71cebc163ca", 00:07:57.062 "strip_size_kb": 64, 00:07:57.063 "state": "configuring", 00:07:57.063 "raid_level": "raid0", 00:07:57.063 "superblock": true, 00:07:57.063 "num_base_bdevs": 2, 00:07:57.063 "num_base_bdevs_discovered": 1, 00:07:57.063 "num_base_bdevs_operational": 2, 00:07:57.063 "base_bdevs_list": [ 00:07:57.063 { 00:07:57.063 "name": "BaseBdev1", 00:07:57.063 "uuid": "14567d3b-bfe5-4dfa-bf10-1ff58a27632d", 00:07:57.063 "is_configured": true, 00:07:57.063 "data_offset": 2048, 
00:07:57.063 "data_size": 63488 00:07:57.063 }, 00:07:57.063 { 00:07:57.063 "name": "BaseBdev2", 00:07:57.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.063 "is_configured": false, 00:07:57.063 "data_offset": 0, 00:07:57.063 "data_size": 0 00:07:57.063 } 00:07:57.063 ] 00:07:57.063 }' 00:07:57.063 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.063 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.322 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:07:57.322 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.322 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.322 [2024-10-09 03:10:40.607981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:57.322 [2024-10-09 03:10:40.608384] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:07:57.322 [2024-10-09 03:10:40.608441] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:57.322 [2024-10-09 03:10:40.608765] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:07:57.322 [2024-10-09 03:10:40.608984] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:07:57.322 [2024-10-09 03:10:40.609045] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:07:57.322 BaseBdev2 00:07:57.322 [2024-10-09 03:10:40.609249] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:57.322 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.322 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # 
waitforbdev BaseBdev2 00:07:57.322 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:07:57.322 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:57.322 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:07:57.322 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:57.322 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:57.322 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:07:57.322 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.322 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.580 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.580 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.581 [ 00:07:57.581 { 00:07:57.581 "name": "BaseBdev2", 00:07:57.581 "aliases": [ 00:07:57.581 "bf1f62d1-b24f-447c-a33e-f1df4a1bc2e2" 00:07:57.581 ], 00:07:57.581 "product_name": "Malloc disk", 00:07:57.581 "block_size": 512, 00:07:57.581 "num_blocks": 65536, 00:07:57.581 "uuid": "bf1f62d1-b24f-447c-a33e-f1df4a1bc2e2", 00:07:57.581 "assigned_rate_limits": { 00:07:57.581 "rw_ios_per_sec": 0, 00:07:57.581 "rw_mbytes_per_sec": 0, 00:07:57.581 "r_mbytes_per_sec": 0, 00:07:57.581 "w_mbytes_per_sec": 0 00:07:57.581 }, 00:07:57.581 "claimed": true, 00:07:57.581 "claim_type": 
"exclusive_write", 00:07:57.581 "zoned": false, 00:07:57.581 "supported_io_types": { 00:07:57.581 "read": true, 00:07:57.581 "write": true, 00:07:57.581 "unmap": true, 00:07:57.581 "flush": true, 00:07:57.581 "reset": true, 00:07:57.581 "nvme_admin": false, 00:07:57.581 "nvme_io": false, 00:07:57.581 "nvme_io_md": false, 00:07:57.581 "write_zeroes": true, 00:07:57.581 "zcopy": true, 00:07:57.581 "get_zone_info": false, 00:07:57.581 "zone_management": false, 00:07:57.581 "zone_append": false, 00:07:57.581 "compare": false, 00:07:57.581 "compare_and_write": false, 00:07:57.581 "abort": true, 00:07:57.581 "seek_hole": false, 00:07:57.581 "seek_data": false, 00:07:57.581 "copy": true, 00:07:57.581 "nvme_iov_md": false 00:07:57.581 }, 00:07:57.581 "memory_domains": [ 00:07:57.581 { 00:07:57.581 "dma_device_id": "system", 00:07:57.581 "dma_device_type": 1 00:07:57.581 }, 00:07:57.581 { 00:07:57.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.581 "dma_device_type": 2 00:07:57.581 } 00:07:57.581 ], 00:07:57.581 "driver_specific": {} 00:07:57.581 } 00:07:57.581 ] 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:57.581 "name": "Existed_Raid", 00:07:57.581 "uuid": "2cf45f81-d801-47ad-8d78-b71cebc163ca", 00:07:57.581 "strip_size_kb": 64, 00:07:57.581 "state": "online", 00:07:57.581 "raid_level": "raid0", 00:07:57.581 "superblock": true, 00:07:57.581 "num_base_bdevs": 2, 00:07:57.581 "num_base_bdevs_discovered": 2, 00:07:57.581 "num_base_bdevs_operational": 2, 00:07:57.581 "base_bdevs_list": [ 00:07:57.581 { 00:07:57.581 "name": "BaseBdev1", 00:07:57.581 "uuid": "14567d3b-bfe5-4dfa-bf10-1ff58a27632d", 00:07:57.581 "is_configured": true, 00:07:57.581 "data_offset": 2048, 00:07:57.581 "data_size": 63488 
00:07:57.581 }, 00:07:57.581 { 00:07:57.581 "name": "BaseBdev2", 00:07:57.581 "uuid": "bf1f62d1-b24f-447c-a33e-f1df4a1bc2e2", 00:07:57.581 "is_configured": true, 00:07:57.581 "data_offset": 2048, 00:07:57.581 "data_size": 63488 00:07:57.581 } 00:07:57.581 ] 00:07:57.581 }' 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:57.581 03:10:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.839 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:07:57.839 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:07:57.839 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:07:57.839 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:07:57.839 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:07:57.839 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:07:57.839 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:07:57.839 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:07:57.839 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.839 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:57.839 [2024-10-09 03:10:41.079570] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:57.839 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.839 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:07:57.839 "name": 
"Existed_Raid", 00:07:57.839 "aliases": [ 00:07:57.839 "2cf45f81-d801-47ad-8d78-b71cebc163ca" 00:07:57.839 ], 00:07:57.839 "product_name": "Raid Volume", 00:07:57.839 "block_size": 512, 00:07:57.839 "num_blocks": 126976, 00:07:57.839 "uuid": "2cf45f81-d801-47ad-8d78-b71cebc163ca", 00:07:57.839 "assigned_rate_limits": { 00:07:57.839 "rw_ios_per_sec": 0, 00:07:57.839 "rw_mbytes_per_sec": 0, 00:07:57.839 "r_mbytes_per_sec": 0, 00:07:57.839 "w_mbytes_per_sec": 0 00:07:57.839 }, 00:07:57.839 "claimed": false, 00:07:57.839 "zoned": false, 00:07:57.839 "supported_io_types": { 00:07:57.839 "read": true, 00:07:57.839 "write": true, 00:07:57.839 "unmap": true, 00:07:57.840 "flush": true, 00:07:57.840 "reset": true, 00:07:57.840 "nvme_admin": false, 00:07:57.840 "nvme_io": false, 00:07:57.840 "nvme_io_md": false, 00:07:57.840 "write_zeroes": true, 00:07:57.840 "zcopy": false, 00:07:57.840 "get_zone_info": false, 00:07:57.840 "zone_management": false, 00:07:57.840 "zone_append": false, 00:07:57.840 "compare": false, 00:07:57.840 "compare_and_write": false, 00:07:57.840 "abort": false, 00:07:57.840 "seek_hole": false, 00:07:57.840 "seek_data": false, 00:07:57.840 "copy": false, 00:07:57.840 "nvme_iov_md": false 00:07:57.840 }, 00:07:57.840 "memory_domains": [ 00:07:57.840 { 00:07:57.840 "dma_device_id": "system", 00:07:57.840 "dma_device_type": 1 00:07:57.840 }, 00:07:57.840 { 00:07:57.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.840 "dma_device_type": 2 00:07:57.840 }, 00:07:57.840 { 00:07:57.840 "dma_device_id": "system", 00:07:57.840 "dma_device_type": 1 00:07:57.840 }, 00:07:57.840 { 00:07:57.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.840 "dma_device_type": 2 00:07:57.840 } 00:07:57.840 ], 00:07:57.840 "driver_specific": { 00:07:57.840 "raid": { 00:07:57.840 "uuid": "2cf45f81-d801-47ad-8d78-b71cebc163ca", 00:07:57.840 "strip_size_kb": 64, 00:07:57.840 "state": "online", 00:07:57.840 "raid_level": "raid0", 00:07:57.840 "superblock": true, 00:07:57.840 
"num_base_bdevs": 2, 00:07:57.840 "num_base_bdevs_discovered": 2, 00:07:57.840 "num_base_bdevs_operational": 2, 00:07:57.840 "base_bdevs_list": [ 00:07:57.840 { 00:07:57.840 "name": "BaseBdev1", 00:07:57.840 "uuid": "14567d3b-bfe5-4dfa-bf10-1ff58a27632d", 00:07:57.840 "is_configured": true, 00:07:57.840 "data_offset": 2048, 00:07:57.840 "data_size": 63488 00:07:57.840 }, 00:07:57.840 { 00:07:57.840 "name": "BaseBdev2", 00:07:57.840 "uuid": "bf1f62d1-b24f-447c-a33e-f1df4a1bc2e2", 00:07:57.840 "is_configured": true, 00:07:57.840 "data_offset": 2048, 00:07:57.840 "data_size": 63488 00:07:57.840 } 00:07:57.840 ] 00:07:57.840 } 00:07:57.840 } 00:07:57.840 }' 00:07:57.840 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:07:58.098 BaseBdev2' 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.098 [2024-10-09 03:10:41.287052] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:58.098 [2024-10-09 03:10:41.287182] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:58.098 [2024-10-09 03:10:41.287272] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:07:58.098 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:07:58.099 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:07:58.361 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.361 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:58.361 03:10:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.361 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.361 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.361 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:07:58.361 "name": "Existed_Raid", 00:07:58.361 "uuid": "2cf45f81-d801-47ad-8d78-b71cebc163ca", 00:07:58.361 "strip_size_kb": 64, 00:07:58.361 "state": "offline", 00:07:58.361 "raid_level": "raid0", 00:07:58.361 "superblock": true, 00:07:58.361 "num_base_bdevs": 2, 00:07:58.361 "num_base_bdevs_discovered": 1, 00:07:58.361 "num_base_bdevs_operational": 1, 00:07:58.361 "base_bdevs_list": [ 00:07:58.361 { 00:07:58.361 "name": null, 00:07:58.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:58.361 "is_configured": false, 00:07:58.361 "data_offset": 0, 00:07:58.361 "data_size": 63488 00:07:58.361 }, 00:07:58.361 { 00:07:58.361 "name": "BaseBdev2", 00:07:58.361 "uuid": "bf1f62d1-b24f-447c-a33e-f1df4a1bc2e2", 00:07:58.361 "is_configured": true, 00:07:58.361 "data_offset": 2048, 00:07:58.361 "data_size": 63488 00:07:58.361 } 00:07:58.361 ] 00:07:58.361 }' 00:07:58.361 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:07:58.361 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.627 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:07:58.627 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:58.627 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.627 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:07:58.627 03:10:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.627 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.627 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.627 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:07:58.627 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:58.627 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:07:58.627 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.627 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.627 [2024-10-09 03:10:41.882658] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:58.627 [2024-10-09 03:10:41.882828] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:07:58.885 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.885 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:07:58.885 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:07:58.885 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:07:58.885 03:10:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:07:58.885 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:58.885 03:10:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:58.885 03:10:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:58.885 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:07:58.885 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:07:58.885 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:07:58.885 03:10:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61036 00:07:58.885 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 61036 ']' 00:07:58.885 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 61036 00:07:58.885 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:07:58.885 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:58.885 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61036 00:07:58.885 killing process with pid 61036 00:07:58.885 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:58.885 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:58.885 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61036' 00:07:58.885 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 61036 00:07:58.885 [2024-10-09 03:10:42.068091] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.885 03:10:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 61036 00:07:58.885 [2024-10-09 03:10:42.085525] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:00.263 ************************************ 
00:08:00.263 END TEST raid_state_function_test_sb 00:08:00.263 ************************************ 00:08:00.263 03:10:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:00.263 00:08:00.263 real 0m5.283s 00:08:00.263 user 0m7.372s 00:08:00.263 sys 0m0.884s 00:08:00.263 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:00.263 03:10:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:00.263 03:10:43 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:00.263 03:10:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:00.263 03:10:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:00.263 03:10:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:00.263 ************************************ 00:08:00.263 START TEST raid_superblock_test 00:08:00.263 ************************************ 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 2 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:00.263 
03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61288 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61288 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 61288 ']' 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.263 03:10:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:00.522 [2024-10-09 03:10:43.611004] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:08:00.522 [2024-10-09 03:10:43.611120] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61288 ] 00:08:00.522 [2024-10-09 03:10:43.773392] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.780 [2024-10-09 03:10:44.039470] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.038 [2024-10-09 03:10:44.276515] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.038 [2024-10-09 03:10:44.276552] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:01.297 03:10:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.297 malloc1 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.297 [2024-10-09 03:10:44.518051] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:01.297 [2024-10-09 03:10:44.518207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.297 [2024-10-09 03:10:44.518265] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:01.297 [2024-10-09 03:10:44.518299] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.297 [2024-10-09 03:10:44.520699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.297 [2024-10-09 03:10:44.520770] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:01.297 pt1 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:01.297 03:10:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.297 malloc2 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.297 [2024-10-09 03:10:44.592938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:01.297 [2024-10-09 03:10:44.593083] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:01.297 [2024-10-09 03:10:44.593111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:01.297 
[2024-10-09 03:10:44.593121] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:01.297 [2024-10-09 03:10:44.595481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:01.297 [2024-10-09 03:10:44.595553] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:01.297 pt2 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.297 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.556 [2024-10-09 03:10:44.604987] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:01.556 [2024-10-09 03:10:44.607126] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:01.556 [2024-10-09 03:10:44.607302] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:01.556 [2024-10-09 03:10:44.607316] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:01.556 [2024-10-09 03:10:44.607576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:01.556 [2024-10-09 03:10:44.607742] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:01.556 [2024-10-09 03:10:44.607754] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:01.556 [2024-10-09 03:10:44.607933] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:01.556 "name": "raid_bdev1", 00:08:01.556 "uuid": 
"b44fd173-1969-492b-8ef8-067358bf4582", 00:08:01.556 "strip_size_kb": 64, 00:08:01.556 "state": "online", 00:08:01.556 "raid_level": "raid0", 00:08:01.556 "superblock": true, 00:08:01.556 "num_base_bdevs": 2, 00:08:01.556 "num_base_bdevs_discovered": 2, 00:08:01.556 "num_base_bdevs_operational": 2, 00:08:01.556 "base_bdevs_list": [ 00:08:01.556 { 00:08:01.556 "name": "pt1", 00:08:01.556 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:01.556 "is_configured": true, 00:08:01.556 "data_offset": 2048, 00:08:01.556 "data_size": 63488 00:08:01.556 }, 00:08:01.556 { 00:08:01.556 "name": "pt2", 00:08:01.556 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.556 "is_configured": true, 00:08:01.556 "data_offset": 2048, 00:08:01.556 "data_size": 63488 00:08:01.556 } 00:08:01.556 ] 00:08:01.556 }' 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:01.556 03:10:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.815 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:01.815 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:01.815 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:01.815 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:01.815 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:01.815 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:01.815 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:01.815 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:01.815 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.815 03:10:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:01.815 [2024-10-09 03:10:45.052495] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:01.815 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.815 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:01.815 "name": "raid_bdev1", 00:08:01.815 "aliases": [ 00:08:01.815 "b44fd173-1969-492b-8ef8-067358bf4582" 00:08:01.815 ], 00:08:01.815 "product_name": "Raid Volume", 00:08:01.815 "block_size": 512, 00:08:01.815 "num_blocks": 126976, 00:08:01.815 "uuid": "b44fd173-1969-492b-8ef8-067358bf4582", 00:08:01.815 "assigned_rate_limits": { 00:08:01.815 "rw_ios_per_sec": 0, 00:08:01.815 "rw_mbytes_per_sec": 0, 00:08:01.815 "r_mbytes_per_sec": 0, 00:08:01.815 "w_mbytes_per_sec": 0 00:08:01.815 }, 00:08:01.815 "claimed": false, 00:08:01.815 "zoned": false, 00:08:01.815 "supported_io_types": { 00:08:01.815 "read": true, 00:08:01.815 "write": true, 00:08:01.815 "unmap": true, 00:08:01.815 "flush": true, 00:08:01.815 "reset": true, 00:08:01.815 "nvme_admin": false, 00:08:01.815 "nvme_io": false, 00:08:01.815 "nvme_io_md": false, 00:08:01.815 "write_zeroes": true, 00:08:01.815 "zcopy": false, 00:08:01.815 "get_zone_info": false, 00:08:01.815 "zone_management": false, 00:08:01.815 "zone_append": false, 00:08:01.815 "compare": false, 00:08:01.815 "compare_and_write": false, 00:08:01.815 "abort": false, 00:08:01.815 "seek_hole": false, 00:08:01.815 "seek_data": false, 00:08:01.815 "copy": false, 00:08:01.815 "nvme_iov_md": false 00:08:01.815 }, 00:08:01.815 "memory_domains": [ 00:08:01.815 { 00:08:01.815 "dma_device_id": "system", 00:08:01.815 "dma_device_type": 1 00:08:01.815 }, 00:08:01.815 { 00:08:01.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.815 "dma_device_type": 2 00:08:01.815 }, 00:08:01.815 { 00:08:01.815 "dma_device_id": "system", 00:08:01.815 "dma_device_type": 
1 00:08:01.815 }, 00:08:01.815 { 00:08:01.815 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:01.815 "dma_device_type": 2 00:08:01.815 } 00:08:01.815 ], 00:08:01.815 "driver_specific": { 00:08:01.815 "raid": { 00:08:01.815 "uuid": "b44fd173-1969-492b-8ef8-067358bf4582", 00:08:01.815 "strip_size_kb": 64, 00:08:01.815 "state": "online", 00:08:01.815 "raid_level": "raid0", 00:08:01.815 "superblock": true, 00:08:01.815 "num_base_bdevs": 2, 00:08:01.815 "num_base_bdevs_discovered": 2, 00:08:01.815 "num_base_bdevs_operational": 2, 00:08:01.815 "base_bdevs_list": [ 00:08:01.815 { 00:08:01.815 "name": "pt1", 00:08:01.815 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:01.815 "is_configured": true, 00:08:01.815 "data_offset": 2048, 00:08:01.815 "data_size": 63488 00:08:01.815 }, 00:08:01.815 { 00:08:01.815 "name": "pt2", 00:08:01.815 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:01.815 "is_configured": true, 00:08:01.815 "data_offset": 2048, 00:08:01.815 "data_size": 63488 00:08:01.815 } 00:08:01.815 ] 00:08:01.815 } 00:08:01.815 } 00:08:01.815 }' 00:08:01.815 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:01.815 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:01.815 pt2' 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
pt1 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:02.074 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:02.075 [2024-10-09 03:10:45.260065] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:02.075 03:10:45 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b44fd173-1969-492b-8ef8-067358bf4582 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b44fd173-1969-492b-8ef8-067358bf4582 ']' 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.075 [2024-10-09 03:10:45.307724] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.075 [2024-10-09 03:10:45.307770] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:02.075 [2024-10-09 03:10:45.307891] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:02.075 [2024-10-09 03:10:45.307948] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:02.075 [2024-10-09 03:10:45.307960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.075 03:10:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.075 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd 
bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.334 [2024-10-09 03:10:45.447493] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:02.334 [2024-10-09 03:10:45.449793] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:02.334 [2024-10-09 03:10:45.449932] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:02.334 [2024-10-09 03:10:45.449995] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:02.334 [2024-10-09 03:10:45.450011] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:02.334 [2024-10-09 03:10:45.450024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:02.334 request: 00:08:02.334 { 00:08:02.334 "name": "raid_bdev1", 00:08:02.334 "raid_level": "raid0", 00:08:02.334 "base_bdevs": [ 00:08:02.334 "malloc1", 00:08:02.334 "malloc2" 00:08:02.334 ], 00:08:02.334 "strip_size_kb": 64, 00:08:02.334 "superblock": false, 00:08:02.334 "method": "bdev_raid_create", 00:08:02.334 "req_id": 1 00:08:02.334 } 00:08:02.334 Got JSON-RPC error response 00:08:02.334 response: 00:08:02.334 { 00:08:02.334 "code": -17, 00:08:02.334 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:02.334 } 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.334 [2024-10-09 03:10:45.511419] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:02.334 [2024-10-09 03:10:45.511601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.334 [2024-10-09 03:10:45.511642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:02.334 [2024-10-09 03:10:45.511686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.334 [2024-10-09 03:10:45.514295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.334 [2024-10-09 03:10:45.514384] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:02.334 [2024-10-09 03:10:45.514507] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:02.334 [2024-10-09 03:10:45.514584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:02.334 pt1 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:02.334 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.335 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.335 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:02.335 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.335 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.335 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.335 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.335 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.335 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.335 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.335 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.335 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.335 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.335 "name": "raid_bdev1", 00:08:02.335 "uuid": "b44fd173-1969-492b-8ef8-067358bf4582", 00:08:02.335 "strip_size_kb": 64, 00:08:02.335 "state": "configuring", 00:08:02.335 "raid_level": "raid0", 00:08:02.335 "superblock": true, 00:08:02.335 "num_base_bdevs": 2, 00:08:02.335 "num_base_bdevs_discovered": 1, 00:08:02.335 "num_base_bdevs_operational": 2, 00:08:02.335 "base_bdevs_list": [ 00:08:02.335 { 00:08:02.335 "name": "pt1", 00:08:02.335 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.335 "is_configured": true, 00:08:02.335 "data_offset": 2048, 00:08:02.335 "data_size": 63488 00:08:02.335 }, 00:08:02.335 { 00:08:02.335 "name": null, 00:08:02.335 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.335 "is_configured": false, 00:08:02.335 "data_offset": 2048, 00:08:02.335 "data_size": 63488 00:08:02.335 } 00:08:02.335 ] 00:08:02.335 }' 00:08:02.335 03:10:45 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.335 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.903 [2024-10-09 03:10:45.958656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:02.903 [2024-10-09 03:10:45.958878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:02.903 [2024-10-09 03:10:45.958927] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:02.903 [2024-10-09 03:10:45.958967] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:02.903 [2024-10-09 03:10:45.959595] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:02.903 [2024-10-09 03:10:45.959668] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:02.903 [2024-10-09 03:10:45.959804] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:02.903 [2024-10-09 03:10:45.959871] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:02.903 [2024-10-09 03:10:45.960052] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:02.903 [2024-10-09 03:10:45.960096] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:02.903 [2024-10-09 03:10:45.960392] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:02.903 [2024-10-09 03:10:45.960613] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:02.903 [2024-10-09 03:10:45.960654] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:02.903 [2024-10-09 03:10:45.960863] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:02.903 pt2 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:02.903 03:10:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:02.903 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:02.903 "name": "raid_bdev1", 00:08:02.903 "uuid": "b44fd173-1969-492b-8ef8-067358bf4582", 00:08:02.903 "strip_size_kb": 64, 00:08:02.903 "state": "online", 00:08:02.903 "raid_level": "raid0", 00:08:02.903 "superblock": true, 00:08:02.903 "num_base_bdevs": 2, 00:08:02.903 "num_base_bdevs_discovered": 2, 00:08:02.903 "num_base_bdevs_operational": 2, 00:08:02.903 "base_bdevs_list": [ 00:08:02.903 { 00:08:02.903 "name": "pt1", 00:08:02.903 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:02.903 "is_configured": true, 00:08:02.903 "data_offset": 2048, 00:08:02.903 "data_size": 63488 00:08:02.903 }, 00:08:02.903 { 00:08:02.903 "name": "pt2", 00:08:02.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:02.903 "is_configured": true, 00:08:02.903 "data_offset": 2048, 00:08:02.903 "data_size": 63488 00:08:02.903 } 00:08:02.903 ] 00:08:02.903 }' 00:08:02.903 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:02.903 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.161 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:03.161 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:03.162 
03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:03.162 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:03.162 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:03.162 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:03.162 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:03.162 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.162 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.162 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:03.162 [2024-10-09 03:10:46.422244] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.162 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.420 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:03.420 "name": "raid_bdev1", 00:08:03.420 "aliases": [ 00:08:03.420 "b44fd173-1969-492b-8ef8-067358bf4582" 00:08:03.420 ], 00:08:03.420 "product_name": "Raid Volume", 00:08:03.420 "block_size": 512, 00:08:03.420 "num_blocks": 126976, 00:08:03.420 "uuid": "b44fd173-1969-492b-8ef8-067358bf4582", 00:08:03.420 "assigned_rate_limits": { 00:08:03.420 "rw_ios_per_sec": 0, 00:08:03.420 "rw_mbytes_per_sec": 0, 00:08:03.420 "r_mbytes_per_sec": 0, 00:08:03.420 "w_mbytes_per_sec": 0 00:08:03.420 }, 00:08:03.420 "claimed": false, 00:08:03.420 "zoned": false, 00:08:03.420 "supported_io_types": { 00:08:03.420 "read": true, 00:08:03.420 "write": true, 00:08:03.420 "unmap": true, 00:08:03.420 "flush": true, 00:08:03.420 "reset": true, 00:08:03.420 "nvme_admin": false, 00:08:03.420 "nvme_io": false, 00:08:03.420 "nvme_io_md": false, 00:08:03.420 
"write_zeroes": true, 00:08:03.420 "zcopy": false, 00:08:03.420 "get_zone_info": false, 00:08:03.420 "zone_management": false, 00:08:03.420 "zone_append": false, 00:08:03.420 "compare": false, 00:08:03.420 "compare_and_write": false, 00:08:03.420 "abort": false, 00:08:03.420 "seek_hole": false, 00:08:03.420 "seek_data": false, 00:08:03.420 "copy": false, 00:08:03.420 "nvme_iov_md": false 00:08:03.420 }, 00:08:03.420 "memory_domains": [ 00:08:03.420 { 00:08:03.420 "dma_device_id": "system", 00:08:03.420 "dma_device_type": 1 00:08:03.420 }, 00:08:03.420 { 00:08:03.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.420 "dma_device_type": 2 00:08:03.420 }, 00:08:03.420 { 00:08:03.420 "dma_device_id": "system", 00:08:03.420 "dma_device_type": 1 00:08:03.420 }, 00:08:03.420 { 00:08:03.420 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.420 "dma_device_type": 2 00:08:03.420 } 00:08:03.420 ], 00:08:03.420 "driver_specific": { 00:08:03.420 "raid": { 00:08:03.420 "uuid": "b44fd173-1969-492b-8ef8-067358bf4582", 00:08:03.420 "strip_size_kb": 64, 00:08:03.420 "state": "online", 00:08:03.420 "raid_level": "raid0", 00:08:03.420 "superblock": true, 00:08:03.421 "num_base_bdevs": 2, 00:08:03.421 "num_base_bdevs_discovered": 2, 00:08:03.421 "num_base_bdevs_operational": 2, 00:08:03.421 "base_bdevs_list": [ 00:08:03.421 { 00:08:03.421 "name": "pt1", 00:08:03.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:03.421 "is_configured": true, 00:08:03.421 "data_offset": 2048, 00:08:03.421 "data_size": 63488 00:08:03.421 }, 00:08:03.421 { 00:08:03.421 "name": "pt2", 00:08:03.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:03.421 "is_configured": true, 00:08:03.421 "data_offset": 2048, 00:08:03.421 "data_size": 63488 00:08:03.421 } 00:08:03.421 ] 00:08:03.421 } 00:08:03.421 } 00:08:03.421 }' 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 
00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:03.421 pt2' 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.421 03:10:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:03.421 [2024-10-09 03:10:46.625744] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b44fd173-1969-492b-8ef8-067358bf4582 '!=' b44fd173-1969-492b-8ef8-067358bf4582 ']' 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61288 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 61288 ']' 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 61288 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61288 00:08:03.421 killing process with pid 61288 
00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61288' 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 61288 00:08:03.421 [2024-10-09 03:10:46.697165] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:03.421 [2024-10-09 03:10:46.697261] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:03.421 [2024-10-09 03:10:46.697318] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:03.421 [2024-10-09 03:10:46.697330] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:03.421 03:10:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 61288 00:08:03.680 [2024-10-09 03:10:46.922509] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:05.057 03:10:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:05.057 00:08:05.057 real 0m4.790s 00:08:05.057 user 0m6.453s 00:08:05.057 sys 0m0.839s 00:08:05.057 ************************************ 00:08:05.057 END TEST raid_superblock_test 00:08:05.057 ************************************ 00:08:05.057 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:05.057 03:10:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.316 03:10:48 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:05.316 03:10:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:05.316 03:10:48 bdev_raid -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:08:05.316 03:10:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:05.316 ************************************ 00:08:05.316 START TEST raid_read_error_test 00:08:05.316 ************************************ 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 read 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:05.316 03:10:48 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GTIEvHlxkW 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61505 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61505 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 61505 ']' 00:08:05.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:05.316 03:10:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:05.316 [2024-10-09 03:10:48.499912] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:08:05.316 [2024-10-09 03:10:48.500046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61505 ] 00:08:05.610 [2024-10-09 03:10:48.667232] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:05.868 [2024-10-09 03:10:48.925415] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:05.868 [2024-10-09 03:10:49.160764] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:05.868 [2024-10-09 03:10:49.160818] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.127 BaseBdev1_malloc 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.127 true 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.127 [2024-10-09 03:10:49.386059] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:06.127 [2024-10-09 03:10:49.386124] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.127 [2024-10-09 03:10:49.386142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:06.127 [2024-10-09 03:10:49.386153] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.127 [2024-10-09 03:10:49.388463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.127 [2024-10-09 03:10:49.388561] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:06.127 BaseBdev1 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.127 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:08:06.386 BaseBdev2_malloc 00:08:06.386 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.386 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:06.386 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.386 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.386 true 00:08:06.386 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.386 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:06.386 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.386 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.386 [2024-10-09 03:10:49.471207] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:06.386 [2024-10-09 03:10:49.471279] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:06.386 [2024-10-09 03:10:49.471298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:06.386 [2024-10-09 03:10:49.471310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:06.386 [2024-10-09 03:10:49.473800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:06.386 [2024-10-09 03:10:49.473935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:06.386 BaseBdev2 00:08:06.386 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.386 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:06.386 03:10:49 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.386 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.386 [2024-10-09 03:10:49.483268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:06.386 [2024-10-09 03:10:49.485439] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:06.386 [2024-10-09 03:10:49.485749] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:06.386 [2024-10-09 03:10:49.485770] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:06.387 [2024-10-09 03:10:49.486062] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:06.387 [2024-10-09 03:10:49.486239] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:06.387 [2024-10-09 03:10:49.486251] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:06.387 [2024-10-09 03:10:49.486419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:06.387 "name": "raid_bdev1", 00:08:06.387 "uuid": "2d68df32-0094-441e-ade5-d021cd9aea53", 00:08:06.387 "strip_size_kb": 64, 00:08:06.387 "state": "online", 00:08:06.387 "raid_level": "raid0", 00:08:06.387 "superblock": true, 00:08:06.387 "num_base_bdevs": 2, 00:08:06.387 "num_base_bdevs_discovered": 2, 00:08:06.387 "num_base_bdevs_operational": 2, 00:08:06.387 "base_bdevs_list": [ 00:08:06.387 { 00:08:06.387 "name": "BaseBdev1", 00:08:06.387 "uuid": "ee48c78b-1f05-5e6a-92f3-d9434a3dc7f8", 00:08:06.387 "is_configured": true, 00:08:06.387 "data_offset": 2048, 00:08:06.387 "data_size": 63488 00:08:06.387 }, 00:08:06.387 { 00:08:06.387 "name": "BaseBdev2", 00:08:06.387 "uuid": "ee0f6535-113c-53bb-835a-38a98c1fe24c", 00:08:06.387 "is_configured": true, 00:08:06.387 "data_offset": 2048, 00:08:06.387 "data_size": 63488 00:08:06.387 } 00:08:06.387 ] 00:08:06.387 }' 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:06.387 03:10:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:06.646 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:06.646 03:10:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:06.904 [2024-10-09 03:10:49.976052] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:07.841 "name": "raid_bdev1", 00:08:07.841 "uuid": "2d68df32-0094-441e-ade5-d021cd9aea53", 00:08:07.841 "strip_size_kb": 64, 00:08:07.841 "state": "online", 00:08:07.841 "raid_level": "raid0", 00:08:07.841 "superblock": true, 00:08:07.841 "num_base_bdevs": 2, 00:08:07.841 "num_base_bdevs_discovered": 2, 00:08:07.841 "num_base_bdevs_operational": 2, 00:08:07.841 "base_bdevs_list": [ 00:08:07.841 { 00:08:07.841 "name": "BaseBdev1", 00:08:07.841 "uuid": "ee48c78b-1f05-5e6a-92f3-d9434a3dc7f8", 00:08:07.841 "is_configured": true, 00:08:07.841 "data_offset": 2048, 00:08:07.841 "data_size": 63488 00:08:07.841 }, 00:08:07.841 { 00:08:07.841 "name": "BaseBdev2", 00:08:07.841 "uuid": "ee0f6535-113c-53bb-835a-38a98c1fe24c", 00:08:07.841 "is_configured": true, 00:08:07.841 "data_offset": 2048, 00:08:07.841 "data_size": 63488 00:08:07.841 } 00:08:07.841 ] 00:08:07.841 }' 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:07.841 03:10:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.100 [2024-10-09 03:10:51.320929] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:08.100 [2024-10-09 03:10:51.321068] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:08.100 [2024-10-09 03:10:51.323776] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:08.100 [2024-10-09 03:10:51.323886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:08.100 [2024-10-09 03:10:51.323947] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:08.100 [2024-10-09 03:10:51.323994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:08.100 { 00:08:08.100 "results": [ 00:08:08.100 { 00:08:08.100 "job": "raid_bdev1", 00:08:08.100 "core_mask": "0x1", 00:08:08.100 "workload": "randrw", 00:08:08.100 "percentage": 50, 00:08:08.100 "status": "finished", 00:08:08.100 "queue_depth": 1, 00:08:08.100 "io_size": 131072, 00:08:08.100 "runtime": 1.345458, 00:08:08.100 "iops": 14240.503977084383, 00:08:08.100 "mibps": 1780.062997135548, 00:08:08.100 "io_failed": 1, 00:08:08.100 "io_timeout": 0, 00:08:08.100 "avg_latency_us": 98.67459288324241, 00:08:08.100 "min_latency_us": 26.829694323144103, 00:08:08.100 "max_latency_us": 1438.071615720524 00:08:08.100 } 00:08:08.100 ], 00:08:08.100 "core_count": 1 00:08:08.100 } 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61505 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 61505 ']' 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 61505 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61505 00:08:08.100 killing process with pid 61505 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61505' 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 61505 00:08:08.100 [2024-10-09 03:10:51.365671] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:08.100 03:10:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 61505 00:08:08.358 [2024-10-09 03:10:51.523730] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:09.736 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GTIEvHlxkW 00:08:09.736 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:09.736 03:10:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:09.736 03:10:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:08:09.736 03:10:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:09.736 03:10:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:09.736 03:10:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:09.736 03:10:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:08:09.736 00:08:09.736 real 0m4.628s 00:08:09.737 user 0m5.283s 00:08:09.737 sys 0m0.648s 00:08:09.737 03:10:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.737 ************************************ 00:08:09.737 END TEST raid_read_error_test 00:08:09.737 ************************************ 00:08:09.737 03:10:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.995 03:10:53 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:09.995 03:10:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:09.995 03:10:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.995 03:10:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:09.995 ************************************ 00:08:09.995 START TEST raid_write_error_test 00:08:09.995 ************************************ 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 2 write 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:09.995 03:10:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.gRRHKldHCe 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61645 00:08:09.995 03:10:53 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61645 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 61645 ']' 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:09.995 03:10:53 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:09.995 [2024-10-09 03:10:53.198906] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:08:09.995 [2024-10-09 03:10:53.199111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61645 ] 00:08:10.253 [2024-10-09 03:10:53.361409] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:10.511 [2024-10-09 03:10:53.633979] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:10.770 [2024-10-09 03:10:53.879433] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.770 [2024-10-09 03:10:53.879596] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:10.770 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:10.770 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:10.770 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:10.770 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:10.770 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:10.770 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.037 BaseBdev1_malloc 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.037 true 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.037 [2024-10-09 03:10:54.110018] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:11.037 [2024-10-09 03:10:54.110101] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.037 [2024-10-09 03:10:54.110123] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:11.037 [2024-10-09 03:10:54.110136] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.037 [2024-10-09 03:10:54.112892] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.037 [2024-10-09 03:10:54.112939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:11.037 BaseBdev1 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.037 BaseBdev2_malloc 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:11.037 03:10:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.037 true 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.037 [2024-10-09 03:10:54.193567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:11.037 [2024-10-09 03:10:54.193650] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:11.037 [2024-10-09 03:10:54.193672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:11.037 [2024-10-09 03:10:54.193684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:11.037 [2024-10-09 03:10:54.196173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:11.037 [2024-10-09 03:10:54.196291] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:11.037 BaseBdev2 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.037 [2024-10-09 03:10:54.205659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:11.037 [2024-10-09 03:10:54.208042] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:11.037 [2024-10-09 03:10:54.208285] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:11.037 [2024-10-09 03:10:54.208302] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:11.037 [2024-10-09 03:10:54.208612] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:11.037 [2024-10-09 03:10:54.208822] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:11.037 [2024-10-09 03:10:54.208834] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:11.037 [2024-10-09 03:10:54.209101] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:11.037 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:11.038 "name": "raid_bdev1", 00:08:11.038 "uuid": "bf51e943-0968-443a-9b50-61c534584186", 00:08:11.038 "strip_size_kb": 64, 00:08:11.038 "state": "online", 00:08:11.038 "raid_level": "raid0", 00:08:11.038 "superblock": true, 00:08:11.038 "num_base_bdevs": 2, 00:08:11.038 "num_base_bdevs_discovered": 2, 00:08:11.038 "num_base_bdevs_operational": 2, 00:08:11.038 "base_bdevs_list": [ 00:08:11.038 { 00:08:11.038 "name": "BaseBdev1", 00:08:11.038 "uuid": "c789233a-ac32-554e-b856-0a81b991db72", 00:08:11.038 "is_configured": true, 00:08:11.038 "data_offset": 2048, 00:08:11.038 "data_size": 63488 00:08:11.038 }, 00:08:11.038 { 00:08:11.038 "name": "BaseBdev2", 00:08:11.038 "uuid": "5513bbc0-54e2-5c38-add1-17e834933524", 00:08:11.038 "is_configured": true, 00:08:11.038 "data_offset": 2048, 00:08:11.038 "data_size": 63488 00:08:11.038 } 00:08:11.038 ] 00:08:11.038 }' 00:08:11.038 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:11.038 03:10:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.618 03:10:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:11.618 03:10:54 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:11.618 [2024-10-09 03:10:54.762164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:12.554 03:10:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:12.554 "name": "raid_bdev1", 00:08:12.554 "uuid": "bf51e943-0968-443a-9b50-61c534584186", 00:08:12.554 "strip_size_kb": 64, 00:08:12.554 "state": "online", 00:08:12.554 "raid_level": "raid0", 00:08:12.554 "superblock": true, 00:08:12.554 "num_base_bdevs": 2, 00:08:12.554 "num_base_bdevs_discovered": 2, 00:08:12.554 "num_base_bdevs_operational": 2, 00:08:12.554 "base_bdevs_list": [ 00:08:12.554 { 00:08:12.554 "name": "BaseBdev1", 00:08:12.554 "uuid": "c789233a-ac32-554e-b856-0a81b991db72", 00:08:12.554 "is_configured": true, 00:08:12.554 "data_offset": 2048, 00:08:12.554 "data_size": 63488 00:08:12.554 }, 00:08:12.554 { 00:08:12.554 "name": "BaseBdev2", 00:08:12.554 "uuid": "5513bbc0-54e2-5c38-add1-17e834933524", 00:08:12.554 "is_configured": true, 00:08:12.554 "data_offset": 2048, 00:08:12.554 "data_size": 63488 00:08:12.554 } 00:08:12.554 ] 00:08:12.554 }' 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:12.554 03:10:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.121 [2024-10-09 03:10:56.167762] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.121 [2024-10-09 03:10:56.167922] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.121 [2024-10-09 03:10:56.170718] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.121 [2024-10-09 03:10:56.170815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.121 [2024-10-09 03:10:56.170871] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.121 [2024-10-09 03:10:56.170885] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:13.121 { 00:08:13.121 "results": [ 00:08:13.121 { 00:08:13.121 "job": "raid_bdev1", 00:08:13.121 "core_mask": "0x1", 00:08:13.121 "workload": "randrw", 00:08:13.121 "percentage": 50, 00:08:13.121 "status": "finished", 00:08:13.121 "queue_depth": 1, 00:08:13.121 "io_size": 131072, 00:08:13.121 "runtime": 1.406165, 00:08:13.121 "iops": 12906.735696024292, 00:08:13.121 "mibps": 1613.3419620030365, 00:08:13.121 "io_failed": 1, 00:08:13.121 "io_timeout": 0, 00:08:13.121 "avg_latency_us": 108.98556589315145, 00:08:13.121 "min_latency_us": 27.94759825327511, 00:08:13.121 "max_latency_us": 1659.8637554585152 00:08:13.121 } 00:08:13.121 ], 00:08:13.121 "core_count": 1 00:08:13.121 } 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61645 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@950 -- # '[' -z 61645 ']' 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 61645 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61645 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61645' 00:08:13.121 killing process with pid 61645 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 61645 00:08:13.121 03:10:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 61645 00:08:13.121 [2024-10-09 03:10:56.213898] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:13.121 [2024-10-09 03:10:56.377973] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:15.024 03:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.gRRHKldHCe 00:08:15.024 03:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:15.024 03:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:15.024 03:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:15.024 03:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:15.024 ************************************ 00:08:15.024 END TEST raid_write_error_test 00:08:15.024 ************************************ 00:08:15.024 
03:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:15.024 03:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:15.024 03:10:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:15.024 00:08:15.024 real 0m4.854s 00:08:15.024 user 0m5.635s 00:08:15.024 sys 0m0.653s 00:08:15.024 03:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:15.024 03:10:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.024 03:10:57 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:15.024 03:10:57 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:08:15.024 03:10:57 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:15.024 03:10:57 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:15.024 03:10:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:15.024 ************************************ 00:08:15.024 START TEST raid_state_function_test 00:08:15.024 ************************************ 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 false 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:15.024 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61794 00:08:15.025 03:10:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:15.025 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61794' 00:08:15.025 Process raid pid: 61794 00:08:15.025 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61794 00:08:15.025 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 61794 ']' 00:08:15.025 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:15.025 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:15.025 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:15.025 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:15.025 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:15.025 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.025 [2024-10-09 03:10:58.104299] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:08:15.025 [2024-10-09 03:10:58.104521] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:15.025 [2024-10-09 03:10:58.267889] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.284 [2024-10-09 03:10:58.544047] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:15.542 [2024-10-09 03:10:58.802683] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.542 [2024-10-09 03:10:58.802881] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.801 [2024-10-09 03:10:58.986428] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:15.801 [2024-10-09 03:10:58.986578] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:15.801 [2024-10-09 03:10:58.986615] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:15.801 [2024-10-09 03:10:58.986642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.801 03:10:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:15.801 03:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.801 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:15.801 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:15.801 "name": "Existed_Raid", 00:08:15.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.801 "strip_size_kb": 64, 00:08:15.801 "state": "configuring", 00:08:15.801 
"raid_level": "concat", 00:08:15.801 "superblock": false, 00:08:15.801 "num_base_bdevs": 2, 00:08:15.801 "num_base_bdevs_discovered": 0, 00:08:15.801 "num_base_bdevs_operational": 2, 00:08:15.801 "base_bdevs_list": [ 00:08:15.801 { 00:08:15.801 "name": "BaseBdev1", 00:08:15.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.801 "is_configured": false, 00:08:15.801 "data_offset": 0, 00:08:15.801 "data_size": 0 00:08:15.801 }, 00:08:15.801 { 00:08:15.801 "name": "BaseBdev2", 00:08:15.801 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:15.801 "is_configured": false, 00:08:15.801 "data_offset": 0, 00:08:15.801 "data_size": 0 00:08:15.801 } 00:08:15.801 ] 00:08:15.801 }' 00:08:15.801 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:15.801 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.368 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.368 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.368 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.368 [2024-10-09 03:10:59.377945] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.368 [2024-10-09 03:10:59.378085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:16.368 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.368 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.368 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.368 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:16.368 [2024-10-09 03:10:59.389950] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:16.368 [2024-10-09 03:10:59.390091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:16.368 [2024-10-09 03:10:59.390123] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.368 [2024-10-09 03:10:59.390156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.368 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.368 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:16.368 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.368 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.368 [2024-10-09 03:10:59.470803] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.368 BaseBdev1 00:08:16.368 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.368 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.369 [ 00:08:16.369 { 00:08:16.369 "name": "BaseBdev1", 00:08:16.369 "aliases": [ 00:08:16.369 "ce436ea7-bfef-4e30-bdda-2e5620fc0062" 00:08:16.369 ], 00:08:16.369 "product_name": "Malloc disk", 00:08:16.369 "block_size": 512, 00:08:16.369 "num_blocks": 65536, 00:08:16.369 "uuid": "ce436ea7-bfef-4e30-bdda-2e5620fc0062", 00:08:16.369 "assigned_rate_limits": { 00:08:16.369 "rw_ios_per_sec": 0, 00:08:16.369 "rw_mbytes_per_sec": 0, 00:08:16.369 "r_mbytes_per_sec": 0, 00:08:16.369 "w_mbytes_per_sec": 0 00:08:16.369 }, 00:08:16.369 "claimed": true, 00:08:16.369 "claim_type": "exclusive_write", 00:08:16.369 "zoned": false, 00:08:16.369 "supported_io_types": { 00:08:16.369 "read": true, 00:08:16.369 "write": true, 00:08:16.369 "unmap": true, 00:08:16.369 "flush": true, 00:08:16.369 "reset": true, 00:08:16.369 "nvme_admin": false, 00:08:16.369 "nvme_io": false, 00:08:16.369 "nvme_io_md": false, 00:08:16.369 "write_zeroes": true, 00:08:16.369 "zcopy": true, 00:08:16.369 "get_zone_info": false, 00:08:16.369 "zone_management": false, 00:08:16.369 "zone_append": false, 00:08:16.369 "compare": false, 00:08:16.369 "compare_and_write": false, 00:08:16.369 "abort": true, 00:08:16.369 "seek_hole": false, 00:08:16.369 "seek_data": false, 00:08:16.369 "copy": true, 00:08:16.369 "nvme_iov_md": 
false 00:08:16.369 }, 00:08:16.369 "memory_domains": [ 00:08:16.369 { 00:08:16.369 "dma_device_id": "system", 00:08:16.369 "dma_device_type": 1 00:08:16.369 }, 00:08:16.369 { 00:08:16.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.369 "dma_device_type": 2 00:08:16.369 } 00:08:16.369 ], 00:08:16.369 "driver_specific": {} 00:08:16.369 } 00:08:16.369 ] 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.369 
03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.369 "name": "Existed_Raid", 00:08:16.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.369 "strip_size_kb": 64, 00:08:16.369 "state": "configuring", 00:08:16.369 "raid_level": "concat", 00:08:16.369 "superblock": false, 00:08:16.369 "num_base_bdevs": 2, 00:08:16.369 "num_base_bdevs_discovered": 1, 00:08:16.369 "num_base_bdevs_operational": 2, 00:08:16.369 "base_bdevs_list": [ 00:08:16.369 { 00:08:16.369 "name": "BaseBdev1", 00:08:16.369 "uuid": "ce436ea7-bfef-4e30-bdda-2e5620fc0062", 00:08:16.369 "is_configured": true, 00:08:16.369 "data_offset": 0, 00:08:16.369 "data_size": 65536 00:08:16.369 }, 00:08:16.369 { 00:08:16.369 "name": "BaseBdev2", 00:08:16.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.369 "is_configured": false, 00:08:16.369 "data_offset": 0, 00:08:16.369 "data_size": 0 00:08:16.369 } 00:08:16.369 ] 00:08:16.369 }' 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.369 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.935 03:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:16.935 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.935 03:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.935 [2024-10-09 03:11:00.002011] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:16.935 [2024-10-09 03:11:00.002097] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.935 [2024-10-09 03:11:00.014007] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:16.935 [2024-10-09 03:11:00.016251] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:16.935 [2024-10-09 03:11:00.016304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.935 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:16.935 "name": "Existed_Raid", 00:08:16.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.935 "strip_size_kb": 64, 00:08:16.935 "state": "configuring", 00:08:16.935 "raid_level": "concat", 00:08:16.935 "superblock": false, 00:08:16.935 "num_base_bdevs": 2, 00:08:16.935 "num_base_bdevs_discovered": 1, 00:08:16.935 "num_base_bdevs_operational": 2, 00:08:16.935 "base_bdevs_list": [ 00:08:16.935 { 00:08:16.935 "name": "BaseBdev1", 00:08:16.935 "uuid": "ce436ea7-bfef-4e30-bdda-2e5620fc0062", 00:08:16.935 "is_configured": true, 00:08:16.935 "data_offset": 0, 00:08:16.935 "data_size": 65536 00:08:16.935 }, 00:08:16.935 { 00:08:16.935 "name": "BaseBdev2", 00:08:16.935 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:16.936 "is_configured": false, 00:08:16.936 "data_offset": 0, 00:08:16.936 "data_size": 0 00:08:16.936 } 
00:08:16.936 ] 00:08:16.936 }' 00:08:16.936 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:16.936 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.218 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:17.218 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.218 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.477 [2024-10-09 03:11:00.531409] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:17.477 [2024-10-09 03:11:00.531582] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:17.477 [2024-10-09 03:11:00.531613] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:17.477 [2024-10-09 03:11:00.531998] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:17.477 [2024-10-09 03:11:00.532237] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:17.477 [2024-10-09 03:11:00.532291] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:17.477 [2024-10-09 03:11:00.532661] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:17.477 BaseBdev2 00:08:17.477 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.477 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:17.477 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:17.477 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:17.477 03:11:00 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:17.477 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:17.477 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:17.477 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:17.477 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.477 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.477 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.477 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:17.477 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.477 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.477 [ 00:08:17.477 { 00:08:17.477 "name": "BaseBdev2", 00:08:17.477 "aliases": [ 00:08:17.477 "f82b9dfc-dc26-4b50-910d-3b3cef90c69b" 00:08:17.477 ], 00:08:17.477 "product_name": "Malloc disk", 00:08:17.477 "block_size": 512, 00:08:17.477 "num_blocks": 65536, 00:08:17.477 "uuid": "f82b9dfc-dc26-4b50-910d-3b3cef90c69b", 00:08:17.477 "assigned_rate_limits": { 00:08:17.477 "rw_ios_per_sec": 0, 00:08:17.477 "rw_mbytes_per_sec": 0, 00:08:17.477 "r_mbytes_per_sec": 0, 00:08:17.477 "w_mbytes_per_sec": 0 00:08:17.477 }, 00:08:17.477 "claimed": true, 00:08:17.477 "claim_type": "exclusive_write", 00:08:17.477 "zoned": false, 00:08:17.477 "supported_io_types": { 00:08:17.477 "read": true, 00:08:17.477 "write": true, 00:08:17.478 "unmap": true, 00:08:17.478 "flush": true, 00:08:17.478 "reset": true, 00:08:17.478 "nvme_admin": false, 00:08:17.478 "nvme_io": false, 00:08:17.478 "nvme_io_md": 
false, 00:08:17.478 "write_zeroes": true, 00:08:17.478 "zcopy": true, 00:08:17.478 "get_zone_info": false, 00:08:17.478 "zone_management": false, 00:08:17.478 "zone_append": false, 00:08:17.478 "compare": false, 00:08:17.478 "compare_and_write": false, 00:08:17.478 "abort": true, 00:08:17.478 "seek_hole": false, 00:08:17.478 "seek_data": false, 00:08:17.478 "copy": true, 00:08:17.478 "nvme_iov_md": false 00:08:17.478 }, 00:08:17.478 "memory_domains": [ 00:08:17.478 { 00:08:17.478 "dma_device_id": "system", 00:08:17.478 "dma_device_type": 1 00:08:17.478 }, 00:08:17.478 { 00:08:17.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.478 "dma_device_type": 2 00:08:17.478 } 00:08:17.478 ], 00:08:17.478 "driver_specific": {} 00:08:17.478 } 00:08:17.478 ] 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:17.478 "name": "Existed_Raid", 00:08:17.478 "uuid": "b82fe009-64c4-4256-a9e9-7173a752f814", 00:08:17.478 "strip_size_kb": 64, 00:08:17.478 "state": "online", 00:08:17.478 "raid_level": "concat", 00:08:17.478 "superblock": false, 00:08:17.478 "num_base_bdevs": 2, 00:08:17.478 "num_base_bdevs_discovered": 2, 00:08:17.478 "num_base_bdevs_operational": 2, 00:08:17.478 "base_bdevs_list": [ 00:08:17.478 { 00:08:17.478 "name": "BaseBdev1", 00:08:17.478 "uuid": "ce436ea7-bfef-4e30-bdda-2e5620fc0062", 00:08:17.478 "is_configured": true, 00:08:17.478 "data_offset": 0, 00:08:17.478 "data_size": 65536 00:08:17.478 }, 00:08:17.478 { 00:08:17.478 "name": "BaseBdev2", 00:08:17.478 "uuid": "f82b9dfc-dc26-4b50-910d-3b3cef90c69b", 00:08:17.478 "is_configured": true, 00:08:17.478 "data_offset": 0, 00:08:17.478 "data_size": 65536 00:08:17.478 } 00:08:17.478 ] 00:08:17.478 }' 00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:17.478 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.738 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:17.738 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:17.738 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:17.738 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:17.738 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:17.738 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:17.738 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:17.738 03:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:17.738 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.738 03:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.738 [2024-10-09 03:11:00.983103] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:17.738 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.738 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:17.738 "name": "Existed_Raid", 00:08:17.738 "aliases": [ 00:08:17.738 "b82fe009-64c4-4256-a9e9-7173a752f814" 00:08:17.738 ], 00:08:17.738 "product_name": "Raid Volume", 00:08:17.738 "block_size": 512, 00:08:17.738 "num_blocks": 131072, 00:08:17.738 "uuid": "b82fe009-64c4-4256-a9e9-7173a752f814", 00:08:17.738 "assigned_rate_limits": { 00:08:17.738 "rw_ios_per_sec": 0, 00:08:17.738 "rw_mbytes_per_sec": 0, 00:08:17.738 "r_mbytes_per_sec": 
0, 00:08:17.738 "w_mbytes_per_sec": 0 00:08:17.738 }, 00:08:17.738 "claimed": false, 00:08:17.738 "zoned": false, 00:08:17.738 "supported_io_types": { 00:08:17.738 "read": true, 00:08:17.738 "write": true, 00:08:17.738 "unmap": true, 00:08:17.738 "flush": true, 00:08:17.738 "reset": true, 00:08:17.738 "nvme_admin": false, 00:08:17.738 "nvme_io": false, 00:08:17.738 "nvme_io_md": false, 00:08:17.738 "write_zeroes": true, 00:08:17.738 "zcopy": false, 00:08:17.738 "get_zone_info": false, 00:08:17.738 "zone_management": false, 00:08:17.738 "zone_append": false, 00:08:17.738 "compare": false, 00:08:17.738 "compare_and_write": false, 00:08:17.738 "abort": false, 00:08:17.738 "seek_hole": false, 00:08:17.738 "seek_data": false, 00:08:17.738 "copy": false, 00:08:17.738 "nvme_iov_md": false 00:08:17.738 }, 00:08:17.738 "memory_domains": [ 00:08:17.738 { 00:08:17.738 "dma_device_id": "system", 00:08:17.738 "dma_device_type": 1 00:08:17.738 }, 00:08:17.738 { 00:08:17.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.738 "dma_device_type": 2 00:08:17.738 }, 00:08:17.738 { 00:08:17.738 "dma_device_id": "system", 00:08:17.738 "dma_device_type": 1 00:08:17.738 }, 00:08:17.738 { 00:08:17.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:17.738 "dma_device_type": 2 00:08:17.738 } 00:08:17.738 ], 00:08:17.738 "driver_specific": { 00:08:17.738 "raid": { 00:08:17.738 "uuid": "b82fe009-64c4-4256-a9e9-7173a752f814", 00:08:17.738 "strip_size_kb": 64, 00:08:17.738 "state": "online", 00:08:17.738 "raid_level": "concat", 00:08:17.738 "superblock": false, 00:08:17.738 "num_base_bdevs": 2, 00:08:17.738 "num_base_bdevs_discovered": 2, 00:08:17.738 "num_base_bdevs_operational": 2, 00:08:17.738 "base_bdevs_list": [ 00:08:17.738 { 00:08:17.738 "name": "BaseBdev1", 00:08:17.738 "uuid": "ce436ea7-bfef-4e30-bdda-2e5620fc0062", 00:08:17.738 "is_configured": true, 00:08:17.738 "data_offset": 0, 00:08:17.738 "data_size": 65536 00:08:17.738 }, 00:08:17.738 { 00:08:17.738 "name": "BaseBdev2", 
00:08:17.738 "uuid": "f82b9dfc-dc26-4b50-910d-3b3cef90c69b", 00:08:17.738 "is_configured": true, 00:08:17.738 "data_offset": 0, 00:08:17.738 "data_size": 65536 00:08:17.738 } 00:08:17.738 ] 00:08:17.738 } 00:08:17.738 } 00:08:17.738 }' 00:08:17.738 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:17.997 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:17.997 BaseBdev2' 00:08:17.997 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.997 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:17.997 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:17.998 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:17.998 [2024-10-09 03:11:01.226402] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:17.998 [2024-10-09 03:11:01.226530] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:17.998 [2024-10-09 03:11:01.226606] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.256 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.256 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:18.256 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:18.256 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:18.256 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:18.256 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:18.256 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:18.256 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:18.256 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:18.257 "name": "Existed_Raid", 00:08:18.257 "uuid": "b82fe009-64c4-4256-a9e9-7173a752f814", 00:08:18.257 "strip_size_kb": 64, 00:08:18.257 
"state": "offline", 00:08:18.257 "raid_level": "concat", 00:08:18.257 "superblock": false, 00:08:18.257 "num_base_bdevs": 2, 00:08:18.257 "num_base_bdevs_discovered": 1, 00:08:18.257 "num_base_bdevs_operational": 1, 00:08:18.257 "base_bdevs_list": [ 00:08:18.257 { 00:08:18.257 "name": null, 00:08:18.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:18.257 "is_configured": false, 00:08:18.257 "data_offset": 0, 00:08:18.257 "data_size": 65536 00:08:18.257 }, 00:08:18.257 { 00:08:18.257 "name": "BaseBdev2", 00:08:18.257 "uuid": "f82b9dfc-dc26-4b50-910d-3b3cef90c69b", 00:08:18.257 "is_configured": true, 00:08:18.257 "data_offset": 0, 00:08:18.257 "data_size": 65536 00:08:18.257 } 00:08:18.257 ] 00:08:18.257 }' 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:18.257 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.521 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:18.521 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.521 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.521 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.521 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.521 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:18.799 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.799 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:18.799 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:18.799 03:11:01 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:18.799 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.799 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.799 [2024-10-09 03:11:01.864981] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:18.799 [2024-10-09 03:11:01.865153] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:18.799 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.799 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:18.799 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:18.799 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:18.799 03:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:18.799 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:18.799 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.799 03:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:18.799 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:18.799 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:18.799 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:18.799 03:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61794 00:08:18.799 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 61794 ']' 00:08:18.799 03:11:02 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 61794 00:08:18.799 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:18.799 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:18.799 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 61794 00:08:18.799 killing process with pid 61794 00:08:18.799 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:18.799 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:18.799 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 61794' 00:08:18.799 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 61794 00:08:18.799 [2024-10-09 03:11:02.073972] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:18.799 03:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 61794 00:08:18.799 [2024-10-09 03:11:02.093281] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:20.702 00:08:20.702 real 0m5.513s 00:08:20.702 user 0m7.665s 00:08:20.702 sys 0m0.939s 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:20.702 ************************************ 00:08:20.702 END TEST raid_state_function_test 00:08:20.702 ************************************ 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.702 03:11:03 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:08:20.702 03:11:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 
']' 00:08:20.702 03:11:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:20.702 03:11:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:20.702 ************************************ 00:08:20.702 START TEST raid_state_function_test_sb 00:08:20.702 ************************************ 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 2 true 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:20.702 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:20.703 Process raid pid: 62053 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62053 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62053' 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62053 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 62053 ']' 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:20.703 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:20.703 03:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:20.703 [2024-10-09 03:11:03.701951] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:08:20.703 [2024-10-09 03:11:03.702148] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:20.703 [2024-10-09 03:11:03.863719] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:20.962 [2024-10-09 03:11:04.115801] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.220 [2024-10-09 03:11:04.355419] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.220 [2024-10-09 03:11:04.355546] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.220 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:21.220 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:21.220 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:21.220 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.220 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.479 [2024-10-09 03:11:04.526207] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev1 00:08:21.479 [2024-10-09 03:11:04.526303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.479 [2024-10-09 03:11:04.526334] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.479 [2024-10-09 03:11:04.526359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.479 "name": "Existed_Raid", 00:08:21.479 "uuid": "6f2ce042-58fc-414a-869f-7a403b106f26", 00:08:21.479 "strip_size_kb": 64, 00:08:21.479 "state": "configuring", 00:08:21.479 "raid_level": "concat", 00:08:21.479 "superblock": true, 00:08:21.479 "num_base_bdevs": 2, 00:08:21.479 "num_base_bdevs_discovered": 0, 00:08:21.479 "num_base_bdevs_operational": 2, 00:08:21.479 "base_bdevs_list": [ 00:08:21.479 { 00:08:21.479 "name": "BaseBdev1", 00:08:21.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.479 "is_configured": false, 00:08:21.479 "data_offset": 0, 00:08:21.479 "data_size": 0 00:08:21.479 }, 00:08:21.479 { 00:08:21.479 "name": "BaseBdev2", 00:08:21.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.479 "is_configured": false, 00:08:21.479 "data_offset": 0, 00:08:21.479 "data_size": 0 00:08:21.479 } 00:08:21.479 ] 00:08:21.479 }' 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.479 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.738 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:21.738 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.738 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.738 [2024-10-09 03:11:04.953444] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 
00:08:21.738 [2024-10-09 03:11:04.953499] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:21.738 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.738 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:21.738 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.738 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.738 [2024-10-09 03:11:04.961419] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:21.738 [2024-10-09 03:11:04.961467] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:21.738 [2024-10-09 03:11:04.961477] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:21.738 [2024-10-09 03:11:04.961490] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:21.738 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.738 03:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:21.738 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.738 03:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.738 [2024-10-09 03:11:05.021049] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:21.738 BaseBdev1 00:08:21.738 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.738 03:11:05 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:21.738 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:21.738 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:21.738 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:21.738 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:21.738 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:21.738 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:21.738 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.738 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.738 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.738 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:21.738 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.738 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.996 [ 00:08:21.996 { 00:08:21.996 "name": "BaseBdev1", 00:08:21.996 "aliases": [ 00:08:21.996 "644d4848-8177-41b7-b97d-56716975ccf0" 00:08:21.996 ], 00:08:21.996 "product_name": "Malloc disk", 00:08:21.996 "block_size": 512, 00:08:21.996 "num_blocks": 65536, 00:08:21.996 "uuid": "644d4848-8177-41b7-b97d-56716975ccf0", 00:08:21.996 "assigned_rate_limits": { 00:08:21.996 "rw_ios_per_sec": 0, 00:08:21.996 "rw_mbytes_per_sec": 0, 00:08:21.996 "r_mbytes_per_sec": 0, 00:08:21.996 "w_mbytes_per_sec": 0 00:08:21.996 }, 00:08:21.996 "claimed": true, 
00:08:21.996 "claim_type": "exclusive_write", 00:08:21.996 "zoned": false, 00:08:21.996 "supported_io_types": { 00:08:21.996 "read": true, 00:08:21.996 "write": true, 00:08:21.996 "unmap": true, 00:08:21.996 "flush": true, 00:08:21.996 "reset": true, 00:08:21.996 "nvme_admin": false, 00:08:21.996 "nvme_io": false, 00:08:21.996 "nvme_io_md": false, 00:08:21.996 "write_zeroes": true, 00:08:21.996 "zcopy": true, 00:08:21.996 "get_zone_info": false, 00:08:21.996 "zone_management": false, 00:08:21.996 "zone_append": false, 00:08:21.996 "compare": false, 00:08:21.996 "compare_and_write": false, 00:08:21.996 "abort": true, 00:08:21.996 "seek_hole": false, 00:08:21.996 "seek_data": false, 00:08:21.996 "copy": true, 00:08:21.996 "nvme_iov_md": false 00:08:21.996 }, 00:08:21.996 "memory_domains": [ 00:08:21.996 { 00:08:21.996 "dma_device_id": "system", 00:08:21.996 "dma_device_type": 1 00:08:21.996 }, 00:08:21.996 { 00:08:21.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:21.996 "dma_device_type": 2 00:08:21.996 } 00:08:21.996 ], 00:08:21.996 "driver_specific": {} 00:08:21.996 } 00:08:21.996 ] 00:08:21.996 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:21.997 03:11:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:21.997 "name": "Existed_Raid", 00:08:21.997 "uuid": "6b6caac9-eb01-4b26-809e-511a5aeaeae0", 00:08:21.997 "strip_size_kb": 64, 00:08:21.997 "state": "configuring", 00:08:21.997 "raid_level": "concat", 00:08:21.997 "superblock": true, 00:08:21.997 "num_base_bdevs": 2, 00:08:21.997 "num_base_bdevs_discovered": 1, 00:08:21.997 "num_base_bdevs_operational": 2, 00:08:21.997 "base_bdevs_list": [ 00:08:21.997 { 00:08:21.997 "name": "BaseBdev1", 00:08:21.997 "uuid": "644d4848-8177-41b7-b97d-56716975ccf0", 00:08:21.997 "is_configured": true, 00:08:21.997 "data_offset": 2048, 00:08:21.997 "data_size": 63488 00:08:21.997 }, 00:08:21.997 { 00:08:21.997 "name": "BaseBdev2", 00:08:21.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:21.997 
"is_configured": false, 00:08:21.997 "data_offset": 0, 00:08:21.997 "data_size": 0 00:08:21.997 } 00:08:21.997 ] 00:08:21.997 }' 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:21.997 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.255 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:22.255 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.255 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.255 [2024-10-09 03:11:05.524257] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:22.255 [2024-10-09 03:11:05.524332] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:22.255 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.255 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:22.255 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.256 [2024-10-09 03:11:05.536306] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:22.256 [2024-10-09 03:11:05.538510] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:22.256 [2024-10-09 03:11:05.538558] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.256 03:11:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.256 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.514 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.514 03:11:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:22.514 "name": "Existed_Raid", 00:08:22.514 "uuid": "b9dfb173-533b-4480-83ab-fd799f7b8fc2", 00:08:22.514 "strip_size_kb": 64, 00:08:22.514 "state": "configuring", 00:08:22.514 "raid_level": "concat", 00:08:22.514 "superblock": true, 00:08:22.514 "num_base_bdevs": 2, 00:08:22.514 "num_base_bdevs_discovered": 1, 00:08:22.514 "num_base_bdevs_operational": 2, 00:08:22.514 "base_bdevs_list": [ 00:08:22.514 { 00:08:22.514 "name": "BaseBdev1", 00:08:22.514 "uuid": "644d4848-8177-41b7-b97d-56716975ccf0", 00:08:22.514 "is_configured": true, 00:08:22.514 "data_offset": 2048, 00:08:22.514 "data_size": 63488 00:08:22.514 }, 00:08:22.514 { 00:08:22.514 "name": "BaseBdev2", 00:08:22.514 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:22.514 "is_configured": false, 00:08:22.514 "data_offset": 0, 00:08:22.514 "data_size": 0 00:08:22.514 } 00:08:22.514 ] 00:08:22.514 }' 00:08:22.514 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:22.514 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.778 03:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:22.778 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.778 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.779 [2024-10-09 03:11:05.999094] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:22.779 [2024-10-09 03:11:05.999425] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:22.779 [2024-10-09 03:11:05.999442] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:22.779 [2024-10-09 03:11:05.999747] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000005d40 00:08:22.779 BaseBdev2 00:08:22.779 [2024-10-09 03:11:05.999911] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:22.779 [2024-10-09 03:11:05.999925] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:22.779 [2024-10-09 03:11:06.000080] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.779 03:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.779 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:22.779 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:22.779 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:22.779 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:22.779 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:22.779 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:22.779 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:22.779 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.779 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.779 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.779 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:22.779 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.779 03:11:06 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.779 [ 00:08:22.779 { 00:08:22.779 "name": "BaseBdev2", 00:08:22.779 "aliases": [ 00:08:22.779 "cab0516d-b545-4522-908f-a15e34a7360b" 00:08:22.779 ], 00:08:22.779 "product_name": "Malloc disk", 00:08:22.779 "block_size": 512, 00:08:22.780 "num_blocks": 65536, 00:08:22.780 "uuid": "cab0516d-b545-4522-908f-a15e34a7360b", 00:08:22.780 "assigned_rate_limits": { 00:08:22.780 "rw_ios_per_sec": 0, 00:08:22.780 "rw_mbytes_per_sec": 0, 00:08:22.780 "r_mbytes_per_sec": 0, 00:08:22.780 "w_mbytes_per_sec": 0 00:08:22.780 }, 00:08:22.780 "claimed": true, 00:08:22.780 "claim_type": "exclusive_write", 00:08:22.780 "zoned": false, 00:08:22.780 "supported_io_types": { 00:08:22.780 "read": true, 00:08:22.780 "write": true, 00:08:22.780 "unmap": true, 00:08:22.780 "flush": true, 00:08:22.780 "reset": true, 00:08:22.780 "nvme_admin": false, 00:08:22.780 "nvme_io": false, 00:08:22.780 "nvme_io_md": false, 00:08:22.780 "write_zeroes": true, 00:08:22.780 "zcopy": true, 00:08:22.780 "get_zone_info": false, 00:08:22.780 "zone_management": false, 00:08:22.780 "zone_append": false, 00:08:22.780 "compare": false, 00:08:22.780 "compare_and_write": false, 00:08:22.780 "abort": true, 00:08:22.780 "seek_hole": false, 00:08:22.780 "seek_data": false, 00:08:22.780 "copy": true, 00:08:22.780 "nvme_iov_md": false 00:08:22.780 }, 00:08:22.780 "memory_domains": [ 00:08:22.780 { 00:08:22.780 "dma_device_id": "system", 00:08:22.780 "dma_device_type": 1 00:08:22.780 }, 00:08:22.780 { 00:08:22.780 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:22.780 "dma_device_type": 2 00:08:22.780 } 00:08:22.780 ], 00:08:22.780 "driver_specific": {} 00:08:22.780 } 00:08:22.780 ] 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:22.781 03:11:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:22.781 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.043 03:11:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.043 "name": "Existed_Raid", 00:08:23.043 "uuid": "b9dfb173-533b-4480-83ab-fd799f7b8fc2", 00:08:23.043 "strip_size_kb": 64, 00:08:23.044 "state": "online", 00:08:23.044 "raid_level": "concat", 00:08:23.044 "superblock": true, 00:08:23.044 "num_base_bdevs": 2, 00:08:23.044 "num_base_bdevs_discovered": 2, 00:08:23.044 "num_base_bdevs_operational": 2, 00:08:23.044 "base_bdevs_list": [ 00:08:23.044 { 00:08:23.044 "name": "BaseBdev1", 00:08:23.044 "uuid": "644d4848-8177-41b7-b97d-56716975ccf0", 00:08:23.044 "is_configured": true, 00:08:23.044 "data_offset": 2048, 00:08:23.044 "data_size": 63488 00:08:23.044 }, 00:08:23.044 { 00:08:23.044 "name": "BaseBdev2", 00:08:23.044 "uuid": "cab0516d-b545-4522-908f-a15e34a7360b", 00:08:23.044 "is_configured": true, 00:08:23.044 "data_offset": 2048, 00:08:23.044 "data_size": 63488 00:08:23.044 } 00:08:23.044 ] 00:08:23.044 }' 00:08:23.044 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.044 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.303 [2024-10-09 03:11:06.486681] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:23.303 "name": "Existed_Raid", 00:08:23.303 "aliases": [ 00:08:23.303 "b9dfb173-533b-4480-83ab-fd799f7b8fc2" 00:08:23.303 ], 00:08:23.303 "product_name": "Raid Volume", 00:08:23.303 "block_size": 512, 00:08:23.303 "num_blocks": 126976, 00:08:23.303 "uuid": "b9dfb173-533b-4480-83ab-fd799f7b8fc2", 00:08:23.303 "assigned_rate_limits": { 00:08:23.303 "rw_ios_per_sec": 0, 00:08:23.303 "rw_mbytes_per_sec": 0, 00:08:23.303 "r_mbytes_per_sec": 0, 00:08:23.303 "w_mbytes_per_sec": 0 00:08:23.303 }, 00:08:23.303 "claimed": false, 00:08:23.303 "zoned": false, 00:08:23.303 "supported_io_types": { 00:08:23.303 "read": true, 00:08:23.303 "write": true, 00:08:23.303 "unmap": true, 00:08:23.303 "flush": true, 00:08:23.303 "reset": true, 00:08:23.303 "nvme_admin": false, 00:08:23.303 "nvme_io": false, 00:08:23.303 "nvme_io_md": false, 00:08:23.303 "write_zeroes": true, 00:08:23.303 "zcopy": false, 00:08:23.303 "get_zone_info": false, 00:08:23.303 "zone_management": false, 00:08:23.303 "zone_append": false, 00:08:23.303 "compare": false, 00:08:23.303 "compare_and_write": false, 00:08:23.303 "abort": false, 00:08:23.303 "seek_hole": false, 00:08:23.303 "seek_data": false, 00:08:23.303 "copy": false, 00:08:23.303 "nvme_iov_md": false 00:08:23.303 }, 00:08:23.303 "memory_domains": [ 00:08:23.303 { 00:08:23.303 
"dma_device_id": "system", 00:08:23.303 "dma_device_type": 1 00:08:23.303 }, 00:08:23.303 { 00:08:23.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.303 "dma_device_type": 2 00:08:23.303 }, 00:08:23.303 { 00:08:23.303 "dma_device_id": "system", 00:08:23.303 "dma_device_type": 1 00:08:23.303 }, 00:08:23.303 { 00:08:23.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:23.303 "dma_device_type": 2 00:08:23.303 } 00:08:23.303 ], 00:08:23.303 "driver_specific": { 00:08:23.303 "raid": { 00:08:23.303 "uuid": "b9dfb173-533b-4480-83ab-fd799f7b8fc2", 00:08:23.303 "strip_size_kb": 64, 00:08:23.303 "state": "online", 00:08:23.303 "raid_level": "concat", 00:08:23.303 "superblock": true, 00:08:23.303 "num_base_bdevs": 2, 00:08:23.303 "num_base_bdevs_discovered": 2, 00:08:23.303 "num_base_bdevs_operational": 2, 00:08:23.303 "base_bdevs_list": [ 00:08:23.303 { 00:08:23.303 "name": "BaseBdev1", 00:08:23.303 "uuid": "644d4848-8177-41b7-b97d-56716975ccf0", 00:08:23.303 "is_configured": true, 00:08:23.303 "data_offset": 2048, 00:08:23.303 "data_size": 63488 00:08:23.303 }, 00:08:23.303 { 00:08:23.303 "name": "BaseBdev2", 00:08:23.303 "uuid": "cab0516d-b545-4522-908f-a15e34a7360b", 00:08:23.303 "is_configured": true, 00:08:23.303 "data_offset": 2048, 00:08:23.303 "data_size": 63488 00:08:23.303 } 00:08:23.303 ] 00:08:23.303 } 00:08:23.303 } 00:08:23.303 }' 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:23.303 BaseBdev2' 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:23.303 03:11:06 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.303 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.562 [2024-10-09 03:11:06.678023] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:23.562 [2024-10-09 03:11:06.678098] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:23.562 [2024-10-09 03:11:06.678179] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:23.562 "name": "Existed_Raid", 00:08:23.562 "uuid": "b9dfb173-533b-4480-83ab-fd799f7b8fc2", 00:08:23.562 "strip_size_kb": 64, 00:08:23.562 "state": "offline", 00:08:23.562 "raid_level": "concat", 00:08:23.562 "superblock": true, 00:08:23.562 "num_base_bdevs": 2, 00:08:23.562 "num_base_bdevs_discovered": 1, 00:08:23.562 "num_base_bdevs_operational": 1, 00:08:23.562 "base_bdevs_list": [ 00:08:23.562 { 00:08:23.562 "name": null, 00:08:23.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:23.562 "is_configured": false, 00:08:23.562 "data_offset": 0, 00:08:23.562 "data_size": 63488 00:08:23.562 }, 00:08:23.562 { 00:08:23.562 "name": "BaseBdev2", 00:08:23.562 "uuid": "cab0516d-b545-4522-908f-a15e34a7360b", 00:08:23.562 "is_configured": true, 00:08:23.562 "data_offset": 2048, 00:08:23.562 "data_size": 63488 00:08:23.562 } 00:08:23.562 ] 
00:08:23.562 }' 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:23.562 03:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.130 [2024-10-09 03:11:07.246950] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:24.130 [2024-10-09 03:11:07.247012] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.130 03:11:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62053 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 62053 ']' 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 62053 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62053 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:24.130 killing process with pid 62053 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62053' 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 62053 00:08:24.130 [2024-10-09 03:11:07.430090] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.130 03:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 62053 00:08:24.389 [2024-10-09 03:11:07.450075] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:25.781 ************************************ 00:08:25.781 END TEST raid_state_function_test_sb 00:08:25.781 ************************************ 00:08:25.781 03:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:25.781 00:08:25.781 real 0m5.260s 00:08:25.781 user 0m7.269s 00:08:25.781 sys 0m0.919s 00:08:25.781 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:25.781 03:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:25.781 03:11:08 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:25.781 03:11:08 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:25.781 03:11:08 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:25.781 03:11:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:25.781 ************************************ 00:08:25.781 START TEST raid_superblock_test 00:08:25.781 ************************************ 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 2 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62305 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62305 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 62305 ']' 00:08:25.781 
03:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.781 03:11:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.781 [2024-10-09 03:11:09.013933] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:08:25.781 [2024-10-09 03:11:09.014160] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62305 ] 00:08:26.040 [2024-10-09 03:11:09.179034] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.300 [2024-10-09 03:11:09.445793] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.560 [2024-10-09 03:11:09.689736] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.560 [2024-10-09 03:11:09.689917] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.560 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.560 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:26.560 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:26.560 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:26.560 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:26.560 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:26.560 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:26.560 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:26.560 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:26.560 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:26.560 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:26.560 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.560 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.820 malloc1 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.820 [2024-10-09 03:11:09.910508] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:26.820 [2024-10-09 03:11:09.910637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.820 [2024-10-09 03:11:09.910683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:26.820 [2024-10-09 03:11:09.910719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:26.820 [2024-10-09 03:11:09.913272] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.820 [2024-10-09 03:11:09.913344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:26.820 pt1 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.820 malloc2 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.820 [2024-10-09 03:11:09.983243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:26.820 [2024-10-09 03:11:09.983315] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:26.820 [2024-10-09 03:11:09.983344] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:26.820 [2024-10-09 03:11:09.983354] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:26.820 [2024-10-09 03:11:09.985904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:26.820 [2024-10-09 03:11:09.985939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:26.820 pt2 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.820 [2024-10-09 03:11:09.995290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:26.820 [2024-10-09 03:11:09.997525] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:26.820 [2024-10-09 03:11:09.997776] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:26.820 [2024-10-09 03:11:09.997794] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:08:26.820 [2024-10-09 03:11:09.998087] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:26.820 [2024-10-09 03:11:09.998256] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:26.820 [2024-10-09 03:11:09.998269] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:26.820 [2024-10-09 03:11:09.998449] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.820 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:26.821 03:11:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:26.821 03:11:10 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:26.821 "name": "raid_bdev1", 00:08:26.821 "uuid": "06376bed-d061-421c-bc99-b0b21f015cdc", 00:08:26.821 "strip_size_kb": 64, 00:08:26.821 "state": "online", 00:08:26.821 "raid_level": "concat", 00:08:26.821 "superblock": true, 00:08:26.821 "num_base_bdevs": 2, 00:08:26.821 "num_base_bdevs_discovered": 2, 00:08:26.821 "num_base_bdevs_operational": 2, 00:08:26.821 "base_bdevs_list": [ 00:08:26.821 { 00:08:26.821 "name": "pt1", 00:08:26.821 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:26.821 "is_configured": true, 00:08:26.821 "data_offset": 2048, 00:08:26.821 "data_size": 63488 00:08:26.821 }, 00:08:26.821 { 00:08:26.821 "name": "pt2", 00:08:26.821 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:26.821 "is_configured": true, 00:08:26.821 "data_offset": 2048, 00:08:26.821 "data_size": 63488 00:08:26.821 } 00:08:26.821 ] 00:08:26.821 }' 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:26.821 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.390 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:27.390 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:27.390 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:27.390 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:27.390 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:27.390 
03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:27.390 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:27.390 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:27.390 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.390 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.390 [2024-10-09 03:11:10.474787] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.390 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.390 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:27.390 "name": "raid_bdev1", 00:08:27.390 "aliases": [ 00:08:27.390 "06376bed-d061-421c-bc99-b0b21f015cdc" 00:08:27.390 ], 00:08:27.390 "product_name": "Raid Volume", 00:08:27.390 "block_size": 512, 00:08:27.390 "num_blocks": 126976, 00:08:27.390 "uuid": "06376bed-d061-421c-bc99-b0b21f015cdc", 00:08:27.390 "assigned_rate_limits": { 00:08:27.390 "rw_ios_per_sec": 0, 00:08:27.390 "rw_mbytes_per_sec": 0, 00:08:27.390 "r_mbytes_per_sec": 0, 00:08:27.390 "w_mbytes_per_sec": 0 00:08:27.390 }, 00:08:27.390 "claimed": false, 00:08:27.390 "zoned": false, 00:08:27.390 "supported_io_types": { 00:08:27.390 "read": true, 00:08:27.390 "write": true, 00:08:27.390 "unmap": true, 00:08:27.390 "flush": true, 00:08:27.390 "reset": true, 00:08:27.390 "nvme_admin": false, 00:08:27.390 "nvme_io": false, 00:08:27.390 "nvme_io_md": false, 00:08:27.390 "write_zeroes": true, 00:08:27.390 "zcopy": false, 00:08:27.390 "get_zone_info": false, 00:08:27.390 "zone_management": false, 00:08:27.390 "zone_append": false, 00:08:27.390 "compare": false, 00:08:27.390 "compare_and_write": false, 00:08:27.390 "abort": false, 00:08:27.390 "seek_hole": false, 00:08:27.390 
"seek_data": false, 00:08:27.390 "copy": false, 00:08:27.390 "nvme_iov_md": false 00:08:27.390 }, 00:08:27.390 "memory_domains": [ 00:08:27.390 { 00:08:27.390 "dma_device_id": "system", 00:08:27.390 "dma_device_type": 1 00:08:27.390 }, 00:08:27.390 { 00:08:27.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.390 "dma_device_type": 2 00:08:27.390 }, 00:08:27.390 { 00:08:27.390 "dma_device_id": "system", 00:08:27.390 "dma_device_type": 1 00:08:27.390 }, 00:08:27.390 { 00:08:27.390 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:27.390 "dma_device_type": 2 00:08:27.390 } 00:08:27.390 ], 00:08:27.390 "driver_specific": { 00:08:27.390 "raid": { 00:08:27.390 "uuid": "06376bed-d061-421c-bc99-b0b21f015cdc", 00:08:27.390 "strip_size_kb": 64, 00:08:27.390 "state": "online", 00:08:27.390 "raid_level": "concat", 00:08:27.390 "superblock": true, 00:08:27.390 "num_base_bdevs": 2, 00:08:27.390 "num_base_bdevs_discovered": 2, 00:08:27.390 "num_base_bdevs_operational": 2, 00:08:27.390 "base_bdevs_list": [ 00:08:27.390 { 00:08:27.390 "name": "pt1", 00:08:27.390 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:27.390 "is_configured": true, 00:08:27.390 "data_offset": 2048, 00:08:27.390 "data_size": 63488 00:08:27.390 }, 00:08:27.390 { 00:08:27.390 "name": "pt2", 00:08:27.390 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.390 "is_configured": true, 00:08:27.390 "data_offset": 2048, 00:08:27.391 "data_size": 63488 00:08:27.391 } 00:08:27.391 ] 00:08:27.391 } 00:08:27.391 } 00:08:27.391 }' 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:27.391 pt2' 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.391 03:11:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.391 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.651 [2024-10-09 03:11:10.698358] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=06376bed-d061-421c-bc99-b0b21f015cdc 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 06376bed-d061-421c-bc99-b0b21f015cdc ']' 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.651 [2024-10-09 03:11:10.738033] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.651 [2024-10-09 03:11:10.738075] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:27.651 [2024-10-09 03:11:10.738210] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:27.651 [2024-10-09 03:11:10.738267] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:27.651 [2024-10-09 03:11:10.738283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.651 [2024-10-09 03:11:10.877832] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:27.651 [2024-10-09 03:11:10.880105] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:27.651 [2024-10-09 03:11:10.880219] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: 
*ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:08:27.651 [2024-10-09 03:11:10.880318] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:27.651 [2024-10-09 03:11:10.880371] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:27.651 [2024-10-09 03:11:10.880414] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:27.651 request: 00:08:27.651 { 00:08:27.651 "name": "raid_bdev1", 00:08:27.651 "raid_level": "concat", 00:08:27.651 "base_bdevs": [ 00:08:27.651 "malloc1", 00:08:27.651 "malloc2" 00:08:27.651 ], 00:08:27.651 "strip_size_kb": 64, 00:08:27.651 "superblock": false, 00:08:27.651 "method": "bdev_raid_create", 00:08:27.651 "req_id": 1 00:08:27.651 } 00:08:27.651 Got JSON-RPC error response 00:08:27.651 response: 00:08:27.651 { 00:08:27.651 "code": -17, 00:08:27.651 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:27.651 } 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:27.651 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.652 
03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.652 [2024-10-09 03:11:10.945671] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:27.652 [2024-10-09 03:11:10.945750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:27.652 [2024-10-09 03:11:10.945775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:27.652 [2024-10-09 03:11:10.945787] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:27.652 [2024-10-09 03:11:10.948353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:27.652 [2024-10-09 03:11:10.948393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:27.652 [2024-10-09 03:11:10.948500] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:27.652 [2024-10-09 03:11:10.948574] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:27.652 pt1 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:27.652 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:27.912 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.913 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:27.913 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.913 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.913 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.913 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:27.913 "name": "raid_bdev1", 00:08:27.913 "uuid": "06376bed-d061-421c-bc99-b0b21f015cdc", 00:08:27.913 "strip_size_kb": 64, 00:08:27.913 "state": "configuring", 00:08:27.913 "raid_level": "concat", 00:08:27.913 "superblock": true, 00:08:27.913 "num_base_bdevs": 2, 00:08:27.913 "num_base_bdevs_discovered": 1, 00:08:27.913 "num_base_bdevs_operational": 2, 00:08:27.913 "base_bdevs_list": [ 00:08:27.913 { 00:08:27.913 "name": "pt1", 00:08:27.913 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:27.913 "is_configured": true, 00:08:27.913 "data_offset": 2048, 00:08:27.913 "data_size": 63488 00:08:27.913 }, 00:08:27.913 { 00:08:27.913 "name": null, 00:08:27.913 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:27.913 "is_configured": false, 00:08:27.913 "data_offset": 2048, 00:08:27.913 "data_size": 63488 00:08:27.913 } 00:08:27.913 ] 00:08:27.913 }' 00:08:27.913 03:11:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:27.913 03:11:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.173 [2024-10-09 03:11:11.357006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:28.173 [2024-10-09 03:11:11.357182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:28.173 [2024-10-09 03:11:11.357226] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:28.173 [2024-10-09 03:11:11.357262] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:28.173 [2024-10-09 03:11:11.357895] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:28.173 [2024-10-09 03:11:11.357970] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:28.173 [2024-10-09 03:11:11.358107] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:28.173 [2024-10-09 03:11:11.358165] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:28.173 [2024-10-09 03:11:11.358339] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:28.173 [2024-10-09 03:11:11.358381] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:28.173 [2024-10-09 03:11:11.358671] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:28.173 [2024-10-09 03:11:11.358890] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:28.173 [2024-10-09 03:11:11.358932] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:28.173 [2024-10-09 03:11:11.359118] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.173 pt2 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=2 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.173 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:28.173 "name": "raid_bdev1", 00:08:28.173 "uuid": "06376bed-d061-421c-bc99-b0b21f015cdc", 00:08:28.173 "strip_size_kb": 64, 00:08:28.173 "state": "online", 00:08:28.173 "raid_level": "concat", 00:08:28.173 "superblock": true, 00:08:28.173 "num_base_bdevs": 2, 00:08:28.173 "num_base_bdevs_discovered": 2, 00:08:28.173 "num_base_bdevs_operational": 2, 00:08:28.173 "base_bdevs_list": [ 00:08:28.173 { 00:08:28.173 "name": "pt1", 00:08:28.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.173 "is_configured": true, 00:08:28.173 "data_offset": 2048, 00:08:28.173 "data_size": 63488 00:08:28.173 }, 00:08:28.173 { 00:08:28.173 "name": "pt2", 00:08:28.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.174 "is_configured": true, 00:08:28.174 "data_offset": 2048, 00:08:28.174 "data_size": 63488 00:08:28.174 } 00:08:28.174 ] 00:08:28.174 }' 00:08:28.174 03:11:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:28.174 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.744 [2024-10-09 03:11:11.752581] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:28.744 "name": "raid_bdev1", 00:08:28.744 "aliases": [ 00:08:28.744 "06376bed-d061-421c-bc99-b0b21f015cdc" 00:08:28.744 ], 00:08:28.744 "product_name": "Raid Volume", 00:08:28.744 "block_size": 512, 00:08:28.744 "num_blocks": 126976, 00:08:28.744 "uuid": "06376bed-d061-421c-bc99-b0b21f015cdc", 00:08:28.744 "assigned_rate_limits": { 00:08:28.744 "rw_ios_per_sec": 0, 00:08:28.744 "rw_mbytes_per_sec": 0, 00:08:28.744 
"r_mbytes_per_sec": 0, 00:08:28.744 "w_mbytes_per_sec": 0 00:08:28.744 }, 00:08:28.744 "claimed": false, 00:08:28.744 "zoned": false, 00:08:28.744 "supported_io_types": { 00:08:28.744 "read": true, 00:08:28.744 "write": true, 00:08:28.744 "unmap": true, 00:08:28.744 "flush": true, 00:08:28.744 "reset": true, 00:08:28.744 "nvme_admin": false, 00:08:28.744 "nvme_io": false, 00:08:28.744 "nvme_io_md": false, 00:08:28.744 "write_zeroes": true, 00:08:28.744 "zcopy": false, 00:08:28.744 "get_zone_info": false, 00:08:28.744 "zone_management": false, 00:08:28.744 "zone_append": false, 00:08:28.744 "compare": false, 00:08:28.744 "compare_and_write": false, 00:08:28.744 "abort": false, 00:08:28.744 "seek_hole": false, 00:08:28.744 "seek_data": false, 00:08:28.744 "copy": false, 00:08:28.744 "nvme_iov_md": false 00:08:28.744 }, 00:08:28.744 "memory_domains": [ 00:08:28.744 { 00:08:28.744 "dma_device_id": "system", 00:08:28.744 "dma_device_type": 1 00:08:28.744 }, 00:08:28.744 { 00:08:28.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.744 "dma_device_type": 2 00:08:28.744 }, 00:08:28.744 { 00:08:28.744 "dma_device_id": "system", 00:08:28.744 "dma_device_type": 1 00:08:28.744 }, 00:08:28.744 { 00:08:28.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:28.744 "dma_device_type": 2 00:08:28.744 } 00:08:28.744 ], 00:08:28.744 "driver_specific": { 00:08:28.744 "raid": { 00:08:28.744 "uuid": "06376bed-d061-421c-bc99-b0b21f015cdc", 00:08:28.744 "strip_size_kb": 64, 00:08:28.744 "state": "online", 00:08:28.744 "raid_level": "concat", 00:08:28.744 "superblock": true, 00:08:28.744 "num_base_bdevs": 2, 00:08:28.744 "num_base_bdevs_discovered": 2, 00:08:28.744 "num_base_bdevs_operational": 2, 00:08:28.744 "base_bdevs_list": [ 00:08:28.744 { 00:08:28.744 "name": "pt1", 00:08:28.744 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:28.744 "is_configured": true, 00:08:28.744 "data_offset": 2048, 00:08:28.744 "data_size": 63488 00:08:28.744 }, 00:08:28.744 { 00:08:28.744 "name": 
"pt2", 00:08:28.744 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:28.744 "is_configured": true, 00:08:28.744 "data_offset": 2048, 00:08:28.744 "data_size": 63488 00:08:28.744 } 00:08:28.744 ] 00:08:28.744 } 00:08:28.744 } 00:08:28.744 }' 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:28.744 pt2' 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:28.744 03:11:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:28.745 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:28.745 03:11:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.745 [2024-10-09 03:11:11.984129] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:28.745 03:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:28.745 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 06376bed-d061-421c-bc99-b0b21f015cdc '!=' 06376bed-d061-421c-bc99-b0b21f015cdc ']' 00:08:28.745 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:08:28.745 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:28.745 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:28.745 03:11:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62305 00:08:28.745 03:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 62305 ']' 00:08:28.745 03:11:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@954 -- # kill -0 62305 00:08:28.745 03:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:28.745 03:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:28.745 03:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62305 00:08:29.004 killing process with pid 62305 00:08:29.004 03:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:29.004 03:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:29.004 03:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62305' 00:08:29.004 03:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 62305 00:08:29.004 [2024-10-09 03:11:12.054411] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:29.004 [2024-10-09 03:11:12.054517] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:29.004 03:11:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 62305 00:08:29.004 [2024-10-09 03:11:12.054574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:29.004 [2024-10-09 03:11:12.054586] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:29.005 [2024-10-09 03:11:12.280413] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:30.395 ************************************ 00:08:30.395 END TEST raid_superblock_test 00:08:30.395 ************************************ 00:08:30.395 03:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:30.395 00:08:30.395 real 0m4.718s 00:08:30.395 user 0m6.311s 00:08:30.395 sys 0m0.866s 00:08:30.395 03:11:13 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:30.395 03:11:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.680 03:11:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:30.680 03:11:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:30.680 03:11:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:30.680 03:11:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:30.680 ************************************ 00:08:30.680 START TEST raid_read_error_test 00:08:30.680 ************************************ 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 read 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 
-- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.GMktGJ0iLs 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62511 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62511 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 62511 ']' 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:30.680 03:11:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.680 [2024-10-09 03:11:13.827324] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:08:30.680 [2024-10-09 03:11:13.827515] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62511 ] 00:08:30.939 [2024-10-09 03:11:13.995600] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.198 [2024-10-09 03:11:14.248402] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.198 [2024-10-09 03:11:14.479586] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.198 [2024-10-09 03:11:14.479725] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.457 BaseBdev1_malloc 
00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.457 true 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.457 [2024-10-09 03:11:14.704232] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:31.457 [2024-10-09 03:11:14.704330] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.457 [2024-10-09 03:11:14.704366] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:31.457 [2024-10-09 03:11:14.704395] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.457 [2024-10-09 03:11:14.706813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.457 [2024-10-09 03:11:14.706901] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:31.457 BaseBdev1 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.457 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.717 BaseBdev2_malloc 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.717 true 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.717 [2024-10-09 03:11:14.786030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:31.717 [2024-10-09 03:11:14.786088] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:31.717 [2024-10-09 03:11:14.786104] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:31.717 [2024-10-09 03:11:14.786115] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:31.717 [2024-10-09 03:11:14.788481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:31.717 [2024-10-09 03:11:14.788523] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:31.717 BaseBdev2 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.717 [2024-10-09 03:11:14.798089] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:31.717 [2024-10-09 03:11:14.800206] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:31.717 [2024-10-09 03:11:14.800407] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:31.717 [2024-10-09 03:11:14.800423] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:31.717 [2024-10-09 03:11:14.800658] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:31.717 [2024-10-09 03:11:14.800845] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:31.717 [2024-10-09 03:11:14.800881] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:31.717 [2024-10-09 03:11:14.801060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- 
# local raid_level=concat 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:31.717 "name": "raid_bdev1", 00:08:31.717 "uuid": "1337974e-c93d-4171-bde9-1b8a49d55c38", 00:08:31.717 "strip_size_kb": 64, 00:08:31.717 "state": "online", 00:08:31.717 "raid_level": "concat", 00:08:31.717 "superblock": true, 00:08:31.717 "num_base_bdevs": 2, 00:08:31.717 "num_base_bdevs_discovered": 2, 00:08:31.717 "num_base_bdevs_operational": 2, 00:08:31.717 "base_bdevs_list": [ 00:08:31.717 { 00:08:31.717 "name": "BaseBdev1", 00:08:31.717 "uuid": "e637596d-3703-5fba-bb52-cdcf26b26b5c", 00:08:31.717 "is_configured": true, 00:08:31.717 "data_offset": 2048, 00:08:31.717 "data_size": 63488 00:08:31.717 }, 00:08:31.717 { 00:08:31.717 "name": "BaseBdev2", 00:08:31.717 
"uuid": "7d7aa84b-9b15-52b9-bf24-caacac106815", 00:08:31.717 "is_configured": true, 00:08:31.717 "data_offset": 2048, 00:08:31.717 "data_size": 63488 00:08:31.717 } 00:08:31.717 ] 00:08:31.717 }' 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:31.717 03:11:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.976 03:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:31.976 03:11:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:32.235 [2024-10-09 03:11:15.326497] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=concat 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:33.172 "name": "raid_bdev1", 00:08:33.172 "uuid": "1337974e-c93d-4171-bde9-1b8a49d55c38", 00:08:33.172 "strip_size_kb": 64, 00:08:33.172 "state": "online", 00:08:33.172 "raid_level": "concat", 00:08:33.172 "superblock": true, 00:08:33.172 "num_base_bdevs": 2, 00:08:33.172 "num_base_bdevs_discovered": 2, 00:08:33.172 "num_base_bdevs_operational": 2, 00:08:33.172 "base_bdevs_list": [ 00:08:33.172 { 00:08:33.172 "name": "BaseBdev1", 00:08:33.172 "uuid": "e637596d-3703-5fba-bb52-cdcf26b26b5c", 00:08:33.172 "is_configured": true, 00:08:33.172 "data_offset": 2048, 00:08:33.172 "data_size": 63488 00:08:33.172 }, 00:08:33.172 { 00:08:33.172 "name": "BaseBdev2", 00:08:33.172 "uuid": 
"7d7aa84b-9b15-52b9-bf24-caacac106815", 00:08:33.172 "is_configured": true, 00:08:33.172 "data_offset": 2048, 00:08:33.172 "data_size": 63488 00:08:33.172 } 00:08:33.172 ] 00:08:33.172 }' 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:33.172 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.432 [2024-10-09 03:11:16.690878] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:33.432 [2024-10-09 03:11:16.690927] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:33.432 [2024-10-09 03:11:16.693523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:33.432 [2024-10-09 03:11:16.693574] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.432 [2024-10-09 03:11:16.693609] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:33.432 [2024-10-09 03:11:16.693622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:33.432 { 00:08:33.432 "results": [ 00:08:33.432 { 00:08:33.432 "job": "raid_bdev1", 00:08:33.432 "core_mask": "0x1", 00:08:33.432 "workload": "randrw", 00:08:33.432 "percentage": 50, 00:08:33.432 "status": "finished", 00:08:33.432 "queue_depth": 1, 00:08:33.432 "io_size": 131072, 00:08:33.432 "runtime": 1.365058, 00:08:33.432 "iops": 14893.872641309013, 00:08:33.432 "mibps": 1861.7340801636267, 00:08:33.432 "io_failed": 1, 00:08:33.432 "io_timeout": 0, 00:08:33.432 "avg_latency_us": 
94.17081787308838, 00:08:33.432 "min_latency_us": 25.9353711790393, 00:08:33.432 "max_latency_us": 1345.0620087336245 00:08:33.432 } 00:08:33.432 ], 00:08:33.432 "core_count": 1 00:08:33.432 } 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62511 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 62511 ']' 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 62511 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62511 00:08:33.432 killing process with pid 62511 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62511' 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 62511 00:08:33.432 [2024-10-09 03:11:16.732357] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:33.432 03:11:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 62511 00:08:33.692 [2024-10-09 03:11:16.880546] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:35.072 03:11:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.GMktGJ0iLs 00:08:35.072 03:11:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:35.072 
03:11:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:35.072 03:11:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:08:35.072 03:11:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:35.072 ************************************ 00:08:35.072 END TEST raid_read_error_test 00:08:35.072 ************************************ 00:08:35.072 03:11:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:35.072 03:11:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:35.072 03:11:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:08:35.072 00:08:35.072 real 0m4.606s 00:08:35.072 user 0m5.314s 00:08:35.072 sys 0m0.656s 00:08:35.072 03:11:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:35.072 03:11:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.072 03:11:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:35.072 03:11:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:35.072 03:11:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:35.072 03:11:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:35.331 ************************************ 00:08:35.331 START TEST raid_write_error_test 00:08:35.331 ************************************ 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 2 write 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:35.331 03:11:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.1S5t23iic4 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62662 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62662 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:35.331 03:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 62662 ']' 00:08:35.332 03:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.332 03:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:35.332 03:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.332 03:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:35.332 03:11:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.332 [2024-10-09 03:11:18.493047] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:08:35.332 [2024-10-09 03:11:18.493246] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62662 ] 00:08:35.591 [2024-10-09 03:11:18.657164] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.850 [2024-10-09 03:11:18.899010] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.850 [2024-10-09 03:11:19.133597] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.850 [2024-10-09 03:11:19.133647] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.110 BaseBdev1_malloc 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.110 true 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.110 [2024-10-09 03:11:19.385438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:36.110 [2024-10-09 03:11:19.385504] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.110 [2024-10-09 03:11:19.385522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:36.110 [2024-10-09 03:11:19.385534] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.110 [2024-10-09 03:11:19.388004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.110 [2024-10-09 03:11:19.388126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:36.110 BaseBdev1 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.110 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.370 BaseBdev2_malloc 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:36.370 03:11:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.370 true 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.370 [2024-10-09 03:11:19.468405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:36.370 [2024-10-09 03:11:19.468463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:36.370 [2024-10-09 03:11:19.468480] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:36.370 [2024-10-09 03:11:19.468490] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:36.370 [2024-10-09 03:11:19.470868] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:36.370 [2024-10-09 03:11:19.470903] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:36.370 BaseBdev2 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.370 [2024-10-09 03:11:19.480462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:36.370 [2024-10-09 03:11:19.482536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:36.370 [2024-10-09 03:11:19.482803] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:36.370 [2024-10-09 03:11:19.482823] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:36.370 [2024-10-09 03:11:19.483068] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:36.370 [2024-10-09 03:11:19.483234] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:36.370 [2024-10-09 03:11:19.483245] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:36.370 [2024-10-09 03:11:19.483391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.370 03:11:19 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.370 "name": "raid_bdev1", 00:08:36.370 "uuid": "4bb88730-1039-4cc9-ac19-b51c3bc1cdb2", 00:08:36.370 "strip_size_kb": 64, 00:08:36.370 "state": "online", 00:08:36.370 "raid_level": "concat", 00:08:36.370 "superblock": true, 00:08:36.370 "num_base_bdevs": 2, 00:08:36.370 "num_base_bdevs_discovered": 2, 00:08:36.370 "num_base_bdevs_operational": 2, 00:08:36.370 "base_bdevs_list": [ 00:08:36.370 { 00:08:36.370 "name": "BaseBdev1", 00:08:36.370 "uuid": "f864b171-49a3-5d35-a5c3-4928d69216d3", 00:08:36.370 "is_configured": true, 00:08:36.370 "data_offset": 2048, 00:08:36.370 "data_size": 63488 00:08:36.370 }, 00:08:36.370 { 00:08:36.370 "name": "BaseBdev2", 00:08:36.370 "uuid": "9993dd57-19cf-581b-ac74-fa20fc2e5664", 00:08:36.370 "is_configured": true, 00:08:36.370 "data_offset": 2048, 00:08:36.370 "data_size": 63488 00:08:36.370 } 00:08:36.370 ] 00:08:36.370 }' 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.370 03:11:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.630 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:08:36.630 03:11:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:36.889 [2024-10-09 03:11:19.980749] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.829 "name": "raid_bdev1", 00:08:37.829 "uuid": "4bb88730-1039-4cc9-ac19-b51c3bc1cdb2", 00:08:37.829 "strip_size_kb": 64, 00:08:37.829 "state": "online", 00:08:37.829 "raid_level": "concat", 00:08:37.829 "superblock": true, 00:08:37.829 "num_base_bdevs": 2, 00:08:37.829 "num_base_bdevs_discovered": 2, 00:08:37.829 "num_base_bdevs_operational": 2, 00:08:37.829 "base_bdevs_list": [ 00:08:37.829 { 00:08:37.829 "name": "BaseBdev1", 00:08:37.829 "uuid": "f864b171-49a3-5d35-a5c3-4928d69216d3", 00:08:37.829 "is_configured": true, 00:08:37.829 "data_offset": 2048, 00:08:37.829 "data_size": 63488 00:08:37.829 }, 00:08:37.829 { 00:08:37.829 "name": "BaseBdev2", 00:08:37.829 "uuid": "9993dd57-19cf-581b-ac74-fa20fc2e5664", 00:08:37.829 "is_configured": true, 00:08:37.829 "data_offset": 2048, 00:08:37.829 "data_size": 63488 00:08:37.829 } 00:08:37.829 ] 00:08:37.829 }' 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.829 03:11:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.089 03:11:21 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:38.089 03:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.089 03:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.089 [2024-10-09 03:11:21.380912] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:38.089 [2024-10-09 03:11:21.381065] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.089 [2024-10-09 03:11:21.383666] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.089 [2024-10-09 03:11:21.383774] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.089 [2024-10-09 03:11:21.383830] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:38.089 [2024-10-09 03:11:21.383895] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:38.089 03:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.089 03:11:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62662 00:08:38.090 03:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 62662 ']' 00:08:38.090 03:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 62662 00:08:38.090 { 00:08:38.090 "results": [ 00:08:38.090 { 00:08:38.090 "job": "raid_bdev1", 00:08:38.090 "core_mask": "0x1", 00:08:38.090 "workload": "randrw", 00:08:38.090 "percentage": 50, 00:08:38.090 "status": "finished", 00:08:38.090 "queue_depth": 1, 00:08:38.090 "io_size": 131072, 00:08:38.090 "runtime": 1.400999, 00:08:38.090 "iops": 14805.863530238066, 00:08:38.090 "mibps": 1850.7329412797583, 00:08:38.090 "io_failed": 1, 00:08:38.090 "io_timeout": 0, 00:08:38.090 "avg_latency_us": 94.61055007014181, 
00:08:38.090 "min_latency_us": 25.9353711790393, 00:08:38.090 "max_latency_us": 1373.6803493449781 00:08:38.090 } 00:08:38.090 ], 00:08:38.090 "core_count": 1 00:08:38.090 } 00:08:38.090 03:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:08:38.349 03:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:38.349 03:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62662 00:08:38.349 killing process with pid 62662 00:08:38.349 03:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:38.349 03:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:38.349 03:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62662' 00:08:38.349 03:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 62662 00:08:38.349 [2024-10-09 03:11:21.427135] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:38.349 03:11:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 62662 00:08:38.349 [2024-10-09 03:11:21.583650] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:39.752 03:11:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.1S5t23iic4 00:08:39.752 03:11:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:39.752 03:11:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:39.752 03:11:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:08:39.752 03:11:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:08:39.752 03:11:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:39.752 03:11:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:39.752 ************************************ 00:08:39.752 END TEST raid_write_error_test 00:08:39.752 ************************************ 00:08:39.752 03:11:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:08:39.752 00:08:39.752 real 0m4.630s 00:08:39.752 user 0m5.380s 00:08:39.752 sys 0m0.638s 00:08:39.752 03:11:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:39.752 03:11:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.012 03:11:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:40.012 03:11:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:40.012 03:11:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:40.012 03:11:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:40.012 03:11:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.012 ************************************ 00:08:40.012 START TEST raid_state_function_test 00:08:40.012 ************************************ 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 false 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:40.012 Process raid pid: 62806 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62806 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62806' 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62806 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 62806 ']' 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:40.012 03:11:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.012 [2024-10-09 03:11:23.186845] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:08:40.012 [2024-10-09 03:11:23.186971] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:40.272 [2024-10-09 03:11:23.351885] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:40.532 [2024-10-09 03:11:23.603539] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:40.791 [2024-10-09 03:11:23.841982] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.791 [2024-10-09 03:11:23.842092] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:40.791 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:40.791 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:08:40.791 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:40.791 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.791 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.791 [2024-10-09 03:11:24.007514] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:40.791 [2024-10-09 03:11:24.007575] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:40.792 [2024-10-09 03:11:24.007588] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:40.792 [2024-10-09 03:11:24.007599] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.792 03:11:24 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:40.792 "name": "Existed_Raid", 00:08:40.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.792 "strip_size_kb": 0, 00:08:40.792 "state": "configuring", 00:08:40.792 
"raid_level": "raid1", 00:08:40.792 "superblock": false, 00:08:40.792 "num_base_bdevs": 2, 00:08:40.792 "num_base_bdevs_discovered": 0, 00:08:40.792 "num_base_bdevs_operational": 2, 00:08:40.792 "base_bdevs_list": [ 00:08:40.792 { 00:08:40.792 "name": "BaseBdev1", 00:08:40.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.792 "is_configured": false, 00:08:40.792 "data_offset": 0, 00:08:40.792 "data_size": 0 00:08:40.792 }, 00:08:40.792 { 00:08:40.792 "name": "BaseBdev2", 00:08:40.792 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:40.792 "is_configured": false, 00:08:40.792 "data_offset": 0, 00:08:40.792 "data_size": 0 00:08:40.792 } 00:08:40.792 ] 00:08:40.792 }' 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:40.792 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.360 [2024-10-09 03:11:24.498574] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:41.360 [2024-10-09 03:11:24.498687] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:41.360 [2024-10-09 03:11:24.506606] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:41.360 [2024-10-09 03:11:24.506695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:41.360 [2024-10-09 03:11:24.506726] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:41.360 [2024-10-09 03:11:24.506757] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.360 [2024-10-09 03:11:24.569909] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.360 BaseBdev1 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
rpc_cmd bdev_wait_for_examine 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.360 [ 00:08:41.360 { 00:08:41.360 "name": "BaseBdev1", 00:08:41.360 "aliases": [ 00:08:41.360 "f7a78c3f-bc6c-4d68-895f-972386f1e033" 00:08:41.360 ], 00:08:41.360 "product_name": "Malloc disk", 00:08:41.360 "block_size": 512, 00:08:41.360 "num_blocks": 65536, 00:08:41.360 "uuid": "f7a78c3f-bc6c-4d68-895f-972386f1e033", 00:08:41.360 "assigned_rate_limits": { 00:08:41.360 "rw_ios_per_sec": 0, 00:08:41.360 "rw_mbytes_per_sec": 0, 00:08:41.360 "r_mbytes_per_sec": 0, 00:08:41.360 "w_mbytes_per_sec": 0 00:08:41.360 }, 00:08:41.360 "claimed": true, 00:08:41.360 "claim_type": "exclusive_write", 00:08:41.360 "zoned": false, 00:08:41.360 "supported_io_types": { 00:08:41.360 "read": true, 00:08:41.360 "write": true, 00:08:41.360 "unmap": true, 00:08:41.360 "flush": true, 00:08:41.360 "reset": true, 00:08:41.360 "nvme_admin": false, 00:08:41.360 "nvme_io": false, 00:08:41.360 "nvme_io_md": false, 00:08:41.360 "write_zeroes": true, 00:08:41.360 "zcopy": true, 00:08:41.360 "get_zone_info": false, 00:08:41.360 "zone_management": false, 00:08:41.360 "zone_append": false, 00:08:41.360 "compare": false, 00:08:41.360 "compare_and_write": false, 00:08:41.360 "abort": true, 00:08:41.360 "seek_hole": false, 00:08:41.360 "seek_data": false, 00:08:41.360 "copy": true, 00:08:41.360 "nvme_iov_md": 
false 00:08:41.360 }, 00:08:41.360 "memory_domains": [ 00:08:41.360 { 00:08:41.360 "dma_device_id": "system", 00:08:41.360 "dma_device_type": 1 00:08:41.360 }, 00:08:41.360 { 00:08:41.360 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:41.360 "dma_device_type": 2 00:08:41.360 } 00:08:41.360 ], 00:08:41.360 "driver_specific": {} 00:08:41.360 } 00:08:41.360 ] 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:41.360 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.361 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.361 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.361 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.361 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.361 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.361 
03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.361 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.361 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.361 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.361 "name": "Existed_Raid", 00:08:41.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.361 "strip_size_kb": 0, 00:08:41.361 "state": "configuring", 00:08:41.361 "raid_level": "raid1", 00:08:41.361 "superblock": false, 00:08:41.361 "num_base_bdevs": 2, 00:08:41.361 "num_base_bdevs_discovered": 1, 00:08:41.361 "num_base_bdevs_operational": 2, 00:08:41.361 "base_bdevs_list": [ 00:08:41.361 { 00:08:41.361 "name": "BaseBdev1", 00:08:41.361 "uuid": "f7a78c3f-bc6c-4d68-895f-972386f1e033", 00:08:41.361 "is_configured": true, 00:08:41.361 "data_offset": 0, 00:08:41.361 "data_size": 65536 00:08:41.361 }, 00:08:41.361 { 00:08:41.361 "name": "BaseBdev2", 00:08:41.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.361 "is_configured": false, 00:08:41.361 "data_offset": 0, 00:08:41.361 "data_size": 0 00:08:41.361 } 00:08:41.361 ] 00:08:41.361 }' 00:08:41.361 03:11:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.361 03:11:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.929 [2024-10-09 03:11:25.033104] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:41.929 [2024-10-09 03:11:25.033193] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.929 [2024-10-09 03:11:25.045139] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.929 [2024-10-09 03:11:25.047247] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:41.929 [2024-10-09 03:11:25.047333] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:41.929 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:41.929 "name": "Existed_Raid", 00:08:41.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.929 "strip_size_kb": 0, 00:08:41.929 "state": "configuring", 00:08:41.929 "raid_level": "raid1", 00:08:41.929 "superblock": false, 00:08:41.929 "num_base_bdevs": 2, 00:08:41.929 "num_base_bdevs_discovered": 1, 00:08:41.929 "num_base_bdevs_operational": 2, 00:08:41.929 "base_bdevs_list": [ 00:08:41.929 { 00:08:41.929 "name": "BaseBdev1", 00:08:41.929 "uuid": "f7a78c3f-bc6c-4d68-895f-972386f1e033", 00:08:41.930 "is_configured": true, 00:08:41.930 "data_offset": 0, 00:08:41.930 "data_size": 65536 00:08:41.930 }, 00:08:41.930 { 00:08:41.930 "name": "BaseBdev2", 00:08:41.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.930 "is_configured": false, 00:08:41.930 "data_offset": 0, 00:08:41.930 "data_size": 0 00:08:41.930 } 00:08:41.930 ] 
00:08:41.930 }' 00:08:41.930 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:41.930 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.189 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:42.190 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.190 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.449 [2024-10-09 03:11:25.501060] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.449 [2024-10-09 03:11:25.501117] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:42.449 [2024-10-09 03:11:25.501125] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:42.449 [2024-10-09 03:11:25.501420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:42.449 [2024-10-09 03:11:25.501594] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:42.449 [2024-10-09 03:11:25.501610] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:42.449 [2024-10-09 03:11:25.501924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.449 BaseBdev2 00:08:42.449 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.449 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:42.449 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:42.449 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:42.449 03:11:25 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@901 -- # local i 00:08:42.449 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:42.449 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:42.449 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:42.449 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.449 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.449 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.449 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:42.449 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.449 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.449 [ 00:08:42.449 { 00:08:42.449 "name": "BaseBdev2", 00:08:42.449 "aliases": [ 00:08:42.449 "28076512-ed6d-4cdc-8ddd-c778f292402e" 00:08:42.449 ], 00:08:42.449 "product_name": "Malloc disk", 00:08:42.449 "block_size": 512, 00:08:42.449 "num_blocks": 65536, 00:08:42.449 "uuid": "28076512-ed6d-4cdc-8ddd-c778f292402e", 00:08:42.449 "assigned_rate_limits": { 00:08:42.449 "rw_ios_per_sec": 0, 00:08:42.449 "rw_mbytes_per_sec": 0, 00:08:42.449 "r_mbytes_per_sec": 0, 00:08:42.449 "w_mbytes_per_sec": 0 00:08:42.449 }, 00:08:42.449 "claimed": true, 00:08:42.449 "claim_type": "exclusive_write", 00:08:42.449 "zoned": false, 00:08:42.449 "supported_io_types": { 00:08:42.449 "read": true, 00:08:42.449 "write": true, 00:08:42.449 "unmap": true, 00:08:42.449 "flush": true, 00:08:42.449 "reset": true, 00:08:42.449 "nvme_admin": false, 00:08:42.449 "nvme_io": false, 00:08:42.449 "nvme_io_md": false, 00:08:42.449 "write_zeroes": 
true, 00:08:42.449 "zcopy": true, 00:08:42.449 "get_zone_info": false, 00:08:42.449 "zone_management": false, 00:08:42.449 "zone_append": false, 00:08:42.449 "compare": false, 00:08:42.449 "compare_and_write": false, 00:08:42.449 "abort": true, 00:08:42.449 "seek_hole": false, 00:08:42.449 "seek_data": false, 00:08:42.449 "copy": true, 00:08:42.449 "nvme_iov_md": false 00:08:42.449 }, 00:08:42.449 "memory_domains": [ 00:08:42.450 { 00:08:42.450 "dma_device_id": "system", 00:08:42.450 "dma_device_type": 1 00:08:42.450 }, 00:08:42.450 { 00:08:42.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.450 "dma_device_type": 2 00:08:42.450 } 00:08:42.450 ], 00:08:42.450 "driver_specific": {} 00:08:42.450 } 00:08:42.450 ] 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.450 03:11:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.450 "name": "Existed_Raid", 00:08:42.450 "uuid": "8be87f15-0ff9-47ae-8293-11a19583fed1", 00:08:42.450 "strip_size_kb": 0, 00:08:42.450 "state": "online", 00:08:42.450 "raid_level": "raid1", 00:08:42.450 "superblock": false, 00:08:42.450 "num_base_bdevs": 2, 00:08:42.450 "num_base_bdevs_discovered": 2, 00:08:42.450 "num_base_bdevs_operational": 2, 00:08:42.450 "base_bdevs_list": [ 00:08:42.450 { 00:08:42.450 "name": "BaseBdev1", 00:08:42.450 "uuid": "f7a78c3f-bc6c-4d68-895f-972386f1e033", 00:08:42.450 "is_configured": true, 00:08:42.450 "data_offset": 0, 00:08:42.450 "data_size": 65536 00:08:42.450 }, 00:08:42.450 { 00:08:42.450 "name": "BaseBdev2", 00:08:42.450 "uuid": "28076512-ed6d-4cdc-8ddd-c778f292402e", 00:08:42.450 "is_configured": true, 00:08:42.450 "data_offset": 0, 00:08:42.450 "data_size": 65536 00:08:42.450 } 00:08:42.450 ] 00:08:42.450 }' 00:08:42.450 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.450 03:11:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.709 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:42.709 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:42.709 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:42.709 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:42.709 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:42.709 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:42.709 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:42.709 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:42.709 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.709 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.709 [2024-10-09 03:11:25.956579] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:42.709 03:11:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.709 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:42.709 "name": "Existed_Raid", 00:08:42.709 "aliases": [ 00:08:42.709 "8be87f15-0ff9-47ae-8293-11a19583fed1" 00:08:42.709 ], 00:08:42.709 "product_name": "Raid Volume", 00:08:42.709 "block_size": 512, 00:08:42.709 "num_blocks": 65536, 00:08:42.709 "uuid": "8be87f15-0ff9-47ae-8293-11a19583fed1", 00:08:42.709 "assigned_rate_limits": { 00:08:42.709 "rw_ios_per_sec": 0, 00:08:42.709 "rw_mbytes_per_sec": 0, 00:08:42.709 "r_mbytes_per_sec": 0, 00:08:42.709 
"w_mbytes_per_sec": 0 00:08:42.709 }, 00:08:42.709 "claimed": false, 00:08:42.709 "zoned": false, 00:08:42.709 "supported_io_types": { 00:08:42.709 "read": true, 00:08:42.709 "write": true, 00:08:42.709 "unmap": false, 00:08:42.709 "flush": false, 00:08:42.709 "reset": true, 00:08:42.709 "nvme_admin": false, 00:08:42.709 "nvme_io": false, 00:08:42.709 "nvme_io_md": false, 00:08:42.709 "write_zeroes": true, 00:08:42.709 "zcopy": false, 00:08:42.709 "get_zone_info": false, 00:08:42.709 "zone_management": false, 00:08:42.709 "zone_append": false, 00:08:42.709 "compare": false, 00:08:42.709 "compare_and_write": false, 00:08:42.709 "abort": false, 00:08:42.709 "seek_hole": false, 00:08:42.709 "seek_data": false, 00:08:42.709 "copy": false, 00:08:42.709 "nvme_iov_md": false 00:08:42.709 }, 00:08:42.709 "memory_domains": [ 00:08:42.709 { 00:08:42.709 "dma_device_id": "system", 00:08:42.709 "dma_device_type": 1 00:08:42.709 }, 00:08:42.709 { 00:08:42.709 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.709 "dma_device_type": 2 00:08:42.709 }, 00:08:42.709 { 00:08:42.709 "dma_device_id": "system", 00:08:42.709 "dma_device_type": 1 00:08:42.710 }, 00:08:42.710 { 00:08:42.710 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.710 "dma_device_type": 2 00:08:42.710 } 00:08:42.710 ], 00:08:42.710 "driver_specific": { 00:08:42.710 "raid": { 00:08:42.710 "uuid": "8be87f15-0ff9-47ae-8293-11a19583fed1", 00:08:42.710 "strip_size_kb": 0, 00:08:42.710 "state": "online", 00:08:42.710 "raid_level": "raid1", 00:08:42.710 "superblock": false, 00:08:42.710 "num_base_bdevs": 2, 00:08:42.710 "num_base_bdevs_discovered": 2, 00:08:42.710 "num_base_bdevs_operational": 2, 00:08:42.710 "base_bdevs_list": [ 00:08:42.710 { 00:08:42.710 "name": "BaseBdev1", 00:08:42.710 "uuid": "f7a78c3f-bc6c-4d68-895f-972386f1e033", 00:08:42.710 "is_configured": true, 00:08:42.710 "data_offset": 0, 00:08:42.710 "data_size": 65536 00:08:42.710 }, 00:08:42.710 { 00:08:42.710 "name": "BaseBdev2", 00:08:42.710 "uuid": 
"28076512-ed6d-4cdc-8ddd-c778f292402e", 00:08:42.710 "is_configured": true, 00:08:42.710 "data_offset": 0, 00:08:42.710 "data_size": 65536 00:08:42.710 } 00:08:42.710 ] 00:08:42.710 } 00:08:42.710 } 00:08:42.710 }' 00:08:42.710 03:11:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:42.968 BaseBdev2' 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:42.968 03:11:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:42.968 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:42.968 [2024-10-09 03:11:26.204080] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:43.227 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.228 "name": "Existed_Raid", 00:08:43.228 "uuid": "8be87f15-0ff9-47ae-8293-11a19583fed1", 00:08:43.228 "strip_size_kb": 0, 00:08:43.228 "state": "online", 00:08:43.228 "raid_level": "raid1", 00:08:43.228 "superblock": false, 00:08:43.228 "num_base_bdevs": 2, 00:08:43.228 "num_base_bdevs_discovered": 1, 00:08:43.228 "num_base_bdevs_operational": 1, 00:08:43.228 "base_bdevs_list": [ 00:08:43.228 { 
00:08:43.228 "name": null, 00:08:43.228 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.228 "is_configured": false, 00:08:43.228 "data_offset": 0, 00:08:43.228 "data_size": 65536 00:08:43.228 }, 00:08:43.228 { 00:08:43.228 "name": "BaseBdev2", 00:08:43.228 "uuid": "28076512-ed6d-4cdc-8ddd-c778f292402e", 00:08:43.228 "is_configured": true, 00:08:43.228 "data_offset": 0, 00:08:43.228 "data_size": 65536 00:08:43.228 } 00:08:43.228 ] 00:08:43.228 }' 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.228 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.488 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:43.488 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:43.488 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.488 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:43.488 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.488 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:43.747 [2024-10-09 03:11:26.831522] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:43.747 [2024-10-09 03:11:26.831652] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:43.747 [2024-10-09 03:11:26.936328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:43.747 [2024-10-09 03:11:26.936388] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:43.747 [2024-10-09 03:11:26.936402] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62806 00:08:43.747 03:11:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 62806 ']' 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 62806 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:43.747 03:11:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 62806 00:08:43.747 killing process with pid 62806 00:08:43.747 03:11:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:43.748 03:11:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:43.748 03:11:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 62806' 00:08:43.748 03:11:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 62806 00:08:43.748 [2024-10-09 03:11:27.030564] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:43.748 03:11:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 62806 00:08:43.748 [2024-10-09 03:11:27.047388] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:45.151 03:11:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:45.151 00:08:45.151 real 0m5.316s 00:08:45.151 user 0m7.424s 00:08:45.151 sys 0m0.926s 00:08:45.151 03:11:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:45.151 ************************************ 00:08:45.151 END TEST raid_state_function_test 00:08:45.151 ************************************ 00:08:45.151 03:11:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:45.411 03:11:28 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:45.411 03:11:28 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:45.411 03:11:28 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:45.411 03:11:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:45.411 ************************************ 00:08:45.411 START TEST raid_state_function_test_sb 00:08:45.411 ************************************ 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:45.411 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:45.412 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:08:45.412 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:08:45.412 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:45.412 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:45.412 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=63059 00:08:45.412 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:45.412 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63059' 00:08:45.412 Process raid pid: 63059 00:08:45.412 03:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 63059 00:08:45.412 03:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 63059 ']' 00:08:45.412 03:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:45.412 03:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:45.412 03:11:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:45.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:45.412 03:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:45.412 03:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.412 [2024-10-09 03:11:28.573360] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:08:45.412 [2024-10-09 03:11:28.573538] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:45.672 [2024-10-09 03:11:28.720906] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:45.672 [2024-10-09 03:11:28.973724] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:45.931 [2024-10-09 03:11:29.207293] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:45.931 [2024-10-09 03:11:29.207408] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.191 [2024-10-09 03:11:29.393993] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.191 [2024-10-09 03:11:29.394150] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.191 [2024-10-09 03:11:29.394184] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.191 [2024-10-09 03:11:29.394210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.191 "name": "Existed_Raid", 00:08:46.191 "uuid": "ed72b683-bc6d-4929-8d38-645f645e6c03", 00:08:46.191 "strip_size_kb": 0, 00:08:46.191 "state": "configuring", 00:08:46.191 "raid_level": "raid1", 00:08:46.191 "superblock": true, 00:08:46.191 "num_base_bdevs": 2, 00:08:46.191 "num_base_bdevs_discovered": 0, 00:08:46.191 "num_base_bdevs_operational": 2, 00:08:46.191 "base_bdevs_list": [ 00:08:46.191 { 00:08:46.191 "name": "BaseBdev1", 00:08:46.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.191 "is_configured": false, 00:08:46.191 "data_offset": 0, 00:08:46.191 "data_size": 0 00:08:46.191 }, 00:08:46.191 { 00:08:46.191 "name": "BaseBdev2", 00:08:46.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.191 "is_configured": false, 00:08:46.191 "data_offset": 0, 00:08:46.191 "data_size": 0 00:08:46.191 } 00:08:46.191 ] 00:08:46.191 }' 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.191 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.761 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:46.761 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.761 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.761 [2024-10-09 03:11:29.821120] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:46.761 [2024-10-09 03:11:29.821225] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:46.761 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.761 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:46.761 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.761 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.761 [2024-10-09 03:11:29.833150] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:46.761 [2024-10-09 03:11:29.833228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:46.761 [2024-10-09 03:11:29.833255] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:46.761 [2024-10-09 03:11:29.833281] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:46.761 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.761 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:46.761 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.761 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.762 [2024-10-09 03:11:29.898940] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:46.762 BaseBdev1 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.762 [ 00:08:46.762 { 00:08:46.762 "name": "BaseBdev1", 00:08:46.762 "aliases": [ 00:08:46.762 "7de0259b-4ca2-471b-a200-91b9405f7043" 00:08:46.762 ], 00:08:46.762 "product_name": "Malloc disk", 00:08:46.762 "block_size": 512, 00:08:46.762 "num_blocks": 65536, 00:08:46.762 "uuid": "7de0259b-4ca2-471b-a200-91b9405f7043", 00:08:46.762 "assigned_rate_limits": { 00:08:46.762 "rw_ios_per_sec": 0, 00:08:46.762 "rw_mbytes_per_sec": 0, 00:08:46.762 "r_mbytes_per_sec": 0, 00:08:46.762 "w_mbytes_per_sec": 0 00:08:46.762 }, 00:08:46.762 "claimed": true, 
00:08:46.762 "claim_type": "exclusive_write", 00:08:46.762 "zoned": false, 00:08:46.762 "supported_io_types": { 00:08:46.762 "read": true, 00:08:46.762 "write": true, 00:08:46.762 "unmap": true, 00:08:46.762 "flush": true, 00:08:46.762 "reset": true, 00:08:46.762 "nvme_admin": false, 00:08:46.762 "nvme_io": false, 00:08:46.762 "nvme_io_md": false, 00:08:46.762 "write_zeroes": true, 00:08:46.762 "zcopy": true, 00:08:46.762 "get_zone_info": false, 00:08:46.762 "zone_management": false, 00:08:46.762 "zone_append": false, 00:08:46.762 "compare": false, 00:08:46.762 "compare_and_write": false, 00:08:46.762 "abort": true, 00:08:46.762 "seek_hole": false, 00:08:46.762 "seek_data": false, 00:08:46.762 "copy": true, 00:08:46.762 "nvme_iov_md": false 00:08:46.762 }, 00:08:46.762 "memory_domains": [ 00:08:46.762 { 00:08:46.762 "dma_device_id": "system", 00:08:46.762 "dma_device_type": 1 00:08:46.762 }, 00:08:46.762 { 00:08:46.762 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:46.762 "dma_device_type": 2 00:08:46.762 } 00:08:46.762 ], 00:08:46.762 "driver_specific": {} 00:08:46.762 } 00:08:46.762 ] 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:46.762 "name": "Existed_Raid", 00:08:46.762 "uuid": "743a95bc-b146-4e2b-96fa-ebcc8d90002b", 00:08:46.762 "strip_size_kb": 0, 00:08:46.762 "state": "configuring", 00:08:46.762 "raid_level": "raid1", 00:08:46.762 "superblock": true, 00:08:46.762 "num_base_bdevs": 2, 00:08:46.762 "num_base_bdevs_discovered": 1, 00:08:46.762 "num_base_bdevs_operational": 2, 00:08:46.762 "base_bdevs_list": [ 00:08:46.762 { 00:08:46.762 "name": "BaseBdev1", 00:08:46.762 "uuid": "7de0259b-4ca2-471b-a200-91b9405f7043", 00:08:46.762 "is_configured": true, 00:08:46.762 "data_offset": 2048, 00:08:46.762 "data_size": 63488 00:08:46.762 }, 00:08:46.762 { 00:08:46.762 "name": "BaseBdev2", 00:08:46.762 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:46.762 "is_configured": false, 00:08:46.762 
"data_offset": 0, 00:08:46.762 "data_size": 0 00:08:46.762 } 00:08:46.762 ] 00:08:46.762 }' 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:46.762 03:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.331 [2024-10-09 03:11:30.366235] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:47.331 [2024-10-09 03:11:30.366328] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.331 [2024-10-09 03:11:30.378286] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:47.331 [2024-10-09 03:11:30.380321] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:47.331 [2024-10-09 03:11:30.380367] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.331 "name": "Existed_Raid", 00:08:47.331 "uuid": "36e37816-e167-4e81-9aa8-72438285340b", 00:08:47.331 "strip_size_kb": 0, 00:08:47.331 "state": "configuring", 00:08:47.331 "raid_level": "raid1", 00:08:47.331 "superblock": true, 00:08:47.331 "num_base_bdevs": 2, 00:08:47.331 "num_base_bdevs_discovered": 1, 00:08:47.331 "num_base_bdevs_operational": 2, 00:08:47.331 "base_bdevs_list": [ 00:08:47.331 { 00:08:47.331 "name": "BaseBdev1", 00:08:47.331 "uuid": "7de0259b-4ca2-471b-a200-91b9405f7043", 00:08:47.331 "is_configured": true, 00:08:47.331 "data_offset": 2048, 00:08:47.331 "data_size": 63488 00:08:47.331 }, 00:08:47.331 { 00:08:47.331 "name": "BaseBdev2", 00:08:47.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:47.331 "is_configured": false, 00:08:47.331 "data_offset": 0, 00:08:47.331 "data_size": 0 00:08:47.331 } 00:08:47.331 ] 00:08:47.331 }' 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.331 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.590 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:47.590 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.590 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.590 [2024-10-09 03:11:30.832506] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:47.590 [2024-10-09 03:11:30.832901] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:47.590 [2024-10-09 03:11:30.832965] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:47.590 [2024-10-09 03:11:30.833287] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:47.590 
BaseBdev2 00:08:47.590 [2024-10-09 03:11:30.833484] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:47.590 [2024-10-09 03:11:30.833500] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:47.590 [2024-10-09 03:11:30.833649] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.590 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.590 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:47.590 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:08:47.590 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:08:47.590 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:08:47.590 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:08:47.590 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:08:47.590 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:08:47.590 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.590 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.590 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:08:47.591 [ 00:08:47.591 { 00:08:47.591 "name": "BaseBdev2", 00:08:47.591 "aliases": [ 00:08:47.591 "e09ba984-8b12-4216-adc0-dea061768d50" 00:08:47.591 ], 00:08:47.591 "product_name": "Malloc disk", 00:08:47.591 "block_size": 512, 00:08:47.591 "num_blocks": 65536, 00:08:47.591 "uuid": "e09ba984-8b12-4216-adc0-dea061768d50", 00:08:47.591 "assigned_rate_limits": { 00:08:47.591 "rw_ios_per_sec": 0, 00:08:47.591 "rw_mbytes_per_sec": 0, 00:08:47.591 "r_mbytes_per_sec": 0, 00:08:47.591 "w_mbytes_per_sec": 0 00:08:47.591 }, 00:08:47.591 "claimed": true, 00:08:47.591 "claim_type": "exclusive_write", 00:08:47.591 "zoned": false, 00:08:47.591 "supported_io_types": { 00:08:47.591 "read": true, 00:08:47.591 "write": true, 00:08:47.591 "unmap": true, 00:08:47.591 "flush": true, 00:08:47.591 "reset": true, 00:08:47.591 "nvme_admin": false, 00:08:47.591 "nvme_io": false, 00:08:47.591 "nvme_io_md": false, 00:08:47.591 "write_zeroes": true, 00:08:47.591 "zcopy": true, 00:08:47.591 "get_zone_info": false, 00:08:47.591 "zone_management": false, 00:08:47.591 "zone_append": false, 00:08:47.591 "compare": false, 00:08:47.591 "compare_and_write": false, 00:08:47.591 "abort": true, 00:08:47.591 "seek_hole": false, 00:08:47.591 "seek_data": false, 00:08:47.591 "copy": true, 00:08:47.591 "nvme_iov_md": false 00:08:47.591 }, 00:08:47.591 "memory_domains": [ 00:08:47.591 { 00:08:47.591 "dma_device_id": "system", 00:08:47.591 "dma_device_type": 1 00:08:47.591 }, 00:08:47.591 { 00:08:47.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:47.591 "dma_device_type": 2 00:08:47.591 } 00:08:47.591 ], 00:08:47.591 "driver_specific": {} 00:08:47.591 } 00:08:47.591 ] 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:47.591 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:47.849 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:47.849 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:08:47.849 "name": "Existed_Raid", 00:08:47.849 "uuid": "36e37816-e167-4e81-9aa8-72438285340b", 00:08:47.849 "strip_size_kb": 0, 00:08:47.849 "state": "online", 00:08:47.849 "raid_level": "raid1", 00:08:47.849 "superblock": true, 00:08:47.849 "num_base_bdevs": 2, 00:08:47.849 "num_base_bdevs_discovered": 2, 00:08:47.849 "num_base_bdevs_operational": 2, 00:08:47.849 "base_bdevs_list": [ 00:08:47.849 { 00:08:47.849 "name": "BaseBdev1", 00:08:47.849 "uuid": "7de0259b-4ca2-471b-a200-91b9405f7043", 00:08:47.849 "is_configured": true, 00:08:47.849 "data_offset": 2048, 00:08:47.849 "data_size": 63488 00:08:47.849 }, 00:08:47.849 { 00:08:47.849 "name": "BaseBdev2", 00:08:47.849 "uuid": "e09ba984-8b12-4216-adc0-dea061768d50", 00:08:47.849 "is_configured": true, 00:08:47.849 "data_offset": 2048, 00:08:47.849 "data_size": 63488 00:08:47.849 } 00:08:47.850 ] 00:08:47.850 }' 00:08:47.850 03:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.850 03:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.109 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:48.109 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:48.109 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:48.109 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:48.109 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:48.109 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:48.109 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:48.109 03:11:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.109 03:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.109 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:48.109 [2024-10-09 03:11:31.311995] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.109 03:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.109 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:48.109 "name": "Existed_Raid", 00:08:48.109 "aliases": [ 00:08:48.109 "36e37816-e167-4e81-9aa8-72438285340b" 00:08:48.109 ], 00:08:48.109 "product_name": "Raid Volume", 00:08:48.109 "block_size": 512, 00:08:48.109 "num_blocks": 63488, 00:08:48.109 "uuid": "36e37816-e167-4e81-9aa8-72438285340b", 00:08:48.109 "assigned_rate_limits": { 00:08:48.109 "rw_ios_per_sec": 0, 00:08:48.109 "rw_mbytes_per_sec": 0, 00:08:48.109 "r_mbytes_per_sec": 0, 00:08:48.109 "w_mbytes_per_sec": 0 00:08:48.109 }, 00:08:48.109 "claimed": false, 00:08:48.109 "zoned": false, 00:08:48.109 "supported_io_types": { 00:08:48.109 "read": true, 00:08:48.109 "write": true, 00:08:48.109 "unmap": false, 00:08:48.109 "flush": false, 00:08:48.109 "reset": true, 00:08:48.109 "nvme_admin": false, 00:08:48.109 "nvme_io": false, 00:08:48.109 "nvme_io_md": false, 00:08:48.109 "write_zeroes": true, 00:08:48.109 "zcopy": false, 00:08:48.109 "get_zone_info": false, 00:08:48.109 "zone_management": false, 00:08:48.109 "zone_append": false, 00:08:48.109 "compare": false, 00:08:48.109 "compare_and_write": false, 00:08:48.109 "abort": false, 00:08:48.109 "seek_hole": false, 00:08:48.109 "seek_data": false, 00:08:48.109 "copy": false, 00:08:48.109 "nvme_iov_md": false 00:08:48.109 }, 00:08:48.109 "memory_domains": [ 00:08:48.109 { 00:08:48.109 "dma_device_id": "system", 00:08:48.109 
"dma_device_type": 1 00:08:48.109 }, 00:08:48.109 { 00:08:48.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.109 "dma_device_type": 2 00:08:48.110 }, 00:08:48.110 { 00:08:48.110 "dma_device_id": "system", 00:08:48.110 "dma_device_type": 1 00:08:48.110 }, 00:08:48.110 { 00:08:48.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.110 "dma_device_type": 2 00:08:48.110 } 00:08:48.110 ], 00:08:48.110 "driver_specific": { 00:08:48.110 "raid": { 00:08:48.110 "uuid": "36e37816-e167-4e81-9aa8-72438285340b", 00:08:48.110 "strip_size_kb": 0, 00:08:48.110 "state": "online", 00:08:48.110 "raid_level": "raid1", 00:08:48.110 "superblock": true, 00:08:48.110 "num_base_bdevs": 2, 00:08:48.110 "num_base_bdevs_discovered": 2, 00:08:48.110 "num_base_bdevs_operational": 2, 00:08:48.110 "base_bdevs_list": [ 00:08:48.110 { 00:08:48.110 "name": "BaseBdev1", 00:08:48.110 "uuid": "7de0259b-4ca2-471b-a200-91b9405f7043", 00:08:48.110 "is_configured": true, 00:08:48.110 "data_offset": 2048, 00:08:48.110 "data_size": 63488 00:08:48.110 }, 00:08:48.110 { 00:08:48.110 "name": "BaseBdev2", 00:08:48.110 "uuid": "e09ba984-8b12-4216-adc0-dea061768d50", 00:08:48.110 "is_configured": true, 00:08:48.110 "data_offset": 2048, 00:08:48.110 "data_size": 63488 00:08:48.110 } 00:08:48.110 ] 00:08:48.110 } 00:08:48.110 } 00:08:48.110 }' 00:08:48.110 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:48.110 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:48.110 BaseBdev2' 00:08:48.110 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 
-- # for name in $base_bdev_names 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:48.369 03:11:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.369 [2024-10-09 03:11:31.559342] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:08:48.369 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.628 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.628 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:48.628 03:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.628 03:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.628 03:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.628 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.628 "name": "Existed_Raid", 00:08:48.628 "uuid": "36e37816-e167-4e81-9aa8-72438285340b", 00:08:48.628 "strip_size_kb": 0, 00:08:48.628 "state": "online", 00:08:48.628 "raid_level": "raid1", 00:08:48.628 "superblock": true, 00:08:48.628 "num_base_bdevs": 2, 00:08:48.628 "num_base_bdevs_discovered": 1, 00:08:48.628 "num_base_bdevs_operational": 1, 00:08:48.628 "base_bdevs_list": [ 00:08:48.628 { 00:08:48.628 "name": null, 00:08:48.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:48.628 "is_configured": false, 00:08:48.628 "data_offset": 0, 00:08:48.628 "data_size": 63488 00:08:48.628 }, 00:08:48.628 { 00:08:48.628 "name": "BaseBdev2", 00:08:48.628 "uuid": "e09ba984-8b12-4216-adc0-dea061768d50", 00:08:48.628 "is_configured": true, 00:08:48.628 "data_offset": 2048, 00:08:48.628 "data_size": 63488 00:08:48.628 } 00:08:48.628 ] 00:08:48.628 }' 00:08:48.628 03:11:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.628 03:11:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.888 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 
00:08:48.888 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:48.888 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:48.888 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.888 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.888 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.888 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:48.888 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:48.888 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:48.888 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:48.888 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:48.888 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:48.888 [2024-10-09 03:11:32.125140] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:48.888 [2024-10-09 03:11:32.125348] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:49.147 [2024-10-09 03:11:32.232243] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:49.147 [2024-10-09 03:11:32.232303] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:49.147 [2024-10-09 03:11:32.232318] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 63059 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 63059 ']' 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 63059 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63059 00:08:49.147 killing process with pid 63059 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 
00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63059' 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 63059 00:08:49.147 [2024-10-09 03:11:32.329044] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:49.147 03:11:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 63059 00:08:49.147 [2024-10-09 03:11:32.347308] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:50.524 03:11:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:50.524 00:08:50.524 real 0m5.265s 00:08:50.524 user 0m7.255s 00:08:50.525 sys 0m0.943s 00:08:50.525 03:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:50.525 ************************************ 00:08:50.525 END TEST raid_state_function_test_sb 00:08:50.525 ************************************ 00:08:50.525 03:11:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:50.525 03:11:33 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:50.525 03:11:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:08:50.525 03:11:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:50.525 03:11:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:50.525 ************************************ 00:08:50.525 START TEST raid_superblock_test 00:08:50.525 ************************************ 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 
00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63311 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63311 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 63311 ']' 00:08:50.525 Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock... 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:50.525 03:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:50.785 [2024-10-09 03:11:33.899506] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:08:50.785 [2024-10-09 03:11:33.899722] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63311 ] 00:08:50.785 [2024-10-09 03:11:34.062756] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.104 [2024-10-09 03:11:34.327876] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:51.363 [2024-10-09 03:11:34.566316] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.363 [2024-10-09 03:11:34.566449] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.623 malloc1 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.623 [2024-10-09 03:11:34.782669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:51.623 [2024-10-09 03:11:34.782762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.623 [2024-10-09 03:11:34.782791] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:51.623 [2024-10-09 03:11:34.782804] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.623 [2024-10-09 03:11:34.785391] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.623 pt1 00:08:51.623 [2024-10-09 03:11:34.785532] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.623 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.624 malloc2 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.624 03:11:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.624 [2024-10-09 03:11:34.853735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:51.624 [2024-10-09 03:11:34.853902] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:51.624 [2024-10-09 03:11:34.853945] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:51.624 [2024-10-09 03:11:34.853981] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:51.624 [2024-10-09 03:11:34.856454] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:51.624 [2024-10-09 03:11:34.856527] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:51.624 pt2 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.624 [2024-10-09 03:11:34.865786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:51.624 [2024-10-09 03:11:34.867966] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:51.624 [2024-10-09 03:11:34.868187] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:51.624 [2024-10-09 03:11:34.868235] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:51.624 [2024-10-09 
03:11:34.868527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:51.624 [2024-10-09 03:11:34.868764] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:51.624 [2024-10-09 03:11:34.868808] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:51.624 [2024-10-09 03:11:34.869050] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:51.624 03:11:34 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:51.624 "name": "raid_bdev1", 00:08:51.624 "uuid": "bd32ae4f-18c6-425a-890e-4df3d24998da", 00:08:51.624 "strip_size_kb": 0, 00:08:51.624 "state": "online", 00:08:51.624 "raid_level": "raid1", 00:08:51.624 "superblock": true, 00:08:51.624 "num_base_bdevs": 2, 00:08:51.624 "num_base_bdevs_discovered": 2, 00:08:51.624 "num_base_bdevs_operational": 2, 00:08:51.624 "base_bdevs_list": [ 00:08:51.624 { 00:08:51.624 "name": "pt1", 00:08:51.624 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:51.624 "is_configured": true, 00:08:51.624 "data_offset": 2048, 00:08:51.624 "data_size": 63488 00:08:51.624 }, 00:08:51.624 { 00:08:51.624 "name": "pt2", 00:08:51.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:51.624 "is_configured": true, 00:08:51.624 "data_offset": 2048, 00:08:51.624 "data_size": 63488 00:08:51.624 } 00:08:51.624 ] 00:08:51.624 }' 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:51.624 03:11:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:52.219 
03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.219 [2024-10-09 03:11:35.253437] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:52.219 "name": "raid_bdev1", 00:08:52.219 "aliases": [ 00:08:52.219 "bd32ae4f-18c6-425a-890e-4df3d24998da" 00:08:52.219 ], 00:08:52.219 "product_name": "Raid Volume", 00:08:52.219 "block_size": 512, 00:08:52.219 "num_blocks": 63488, 00:08:52.219 "uuid": "bd32ae4f-18c6-425a-890e-4df3d24998da", 00:08:52.219 "assigned_rate_limits": { 00:08:52.219 "rw_ios_per_sec": 0, 00:08:52.219 "rw_mbytes_per_sec": 0, 00:08:52.219 "r_mbytes_per_sec": 0, 00:08:52.219 "w_mbytes_per_sec": 0 00:08:52.219 }, 00:08:52.219 "claimed": false, 00:08:52.219 "zoned": false, 00:08:52.219 "supported_io_types": { 00:08:52.219 "read": true, 00:08:52.219 "write": true, 00:08:52.219 "unmap": false, 00:08:52.219 "flush": false, 00:08:52.219 "reset": true, 00:08:52.219 "nvme_admin": false, 00:08:52.219 "nvme_io": false, 00:08:52.219 "nvme_io_md": false, 00:08:52.219 "write_zeroes": true, 00:08:52.219 "zcopy": false, 00:08:52.219 "get_zone_info": false, 00:08:52.219 "zone_management": false, 00:08:52.219 "zone_append": false, 00:08:52.219 "compare": false, 00:08:52.219 "compare_and_write": false, 00:08:52.219 "abort": false, 00:08:52.219 "seek_hole": false, 
00:08:52.219 "seek_data": false, 00:08:52.219 "copy": false, 00:08:52.219 "nvme_iov_md": false 00:08:52.219 }, 00:08:52.219 "memory_domains": [ 00:08:52.219 { 00:08:52.219 "dma_device_id": "system", 00:08:52.219 "dma_device_type": 1 00:08:52.219 }, 00:08:52.219 { 00:08:52.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.219 "dma_device_type": 2 00:08:52.219 }, 00:08:52.219 { 00:08:52.219 "dma_device_id": "system", 00:08:52.219 "dma_device_type": 1 00:08:52.219 }, 00:08:52.219 { 00:08:52.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:52.219 "dma_device_type": 2 00:08:52.219 } 00:08:52.219 ], 00:08:52.219 "driver_specific": { 00:08:52.219 "raid": { 00:08:52.219 "uuid": "bd32ae4f-18c6-425a-890e-4df3d24998da", 00:08:52.219 "strip_size_kb": 0, 00:08:52.219 "state": "online", 00:08:52.219 "raid_level": "raid1", 00:08:52.219 "superblock": true, 00:08:52.219 "num_base_bdevs": 2, 00:08:52.219 "num_base_bdevs_discovered": 2, 00:08:52.219 "num_base_bdevs_operational": 2, 00:08:52.219 "base_bdevs_list": [ 00:08:52.219 { 00:08:52.219 "name": "pt1", 00:08:52.219 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.219 "is_configured": true, 00:08:52.219 "data_offset": 2048, 00:08:52.219 "data_size": 63488 00:08:52.219 }, 00:08:52.219 { 00:08:52.219 "name": "pt2", 00:08:52.219 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.219 "is_configured": true, 00:08:52.219 "data_offset": 2048, 00:08:52.219 "data_size": 63488 00:08:52.219 } 00:08:52.219 ] 00:08:52.219 } 00:08:52.219 } 00:08:52.219 }' 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:52.219 pt2' 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.219 03:11:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:52.219 [2024-10-09 03:11:35.497012] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:52.219 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bd32ae4f-18c6-425a-890e-4df3d24998da 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bd32ae4f-18c6-425a-890e-4df3d24998da ']' 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.480 [2024-10-09 03:11:35.544617] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:52.480 [2024-10-09 03:11:35.544667] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:52.480 [2024-10-09 03:11:35.544785] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:52.480 [2024-10-09 03:11:35.544876] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:52.480 [2024-10-09 03:11:35.544890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r 
'.[]' 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set 
+x 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.480 [2024-10-09 03:11:35.684448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:52.480 [2024-10-09 03:11:35.686683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:52.480 [2024-10-09 03:11:35.686762] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:08:52.480 [2024-10-09 03:11:35.686829] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:52.480 [2024-10-09 03:11:35.686941] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:52.480 [2024-10-09 03:11:35.686972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:52.480 request: 00:08:52.480 { 00:08:52.480 "name": "raid_bdev1", 00:08:52.480 "raid_level": "raid1", 00:08:52.480 "base_bdevs": [ 00:08:52.480 "malloc1", 00:08:52.480 "malloc2" 00:08:52.480 ], 00:08:52.480 "superblock": false, 00:08:52.480 "method": "bdev_raid_create", 00:08:52.480 "req_id": 1 00:08:52.480 } 00:08:52.480 Got JSON-RPC error response 00:08:52.480 response: 00:08:52.480 { 00:08:52.480 "code": -17, 00:08:52.480 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:52.480 } 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.480 [2024-10-09 03:11:35.748241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:52.480 [2024-10-09 03:11:35.748337] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.480 [2024-10-09 03:11:35.748359] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:52.480 [2024-10-09 03:11:35.748371] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.480 [2024-10-09 03:11:35.750981] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.480 [2024-10-09 03:11:35.751023] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:52.480 [2024-10-09 03:11:35.751126] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:52.480 [2024-10-09 03:11:35.751200] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:52.480 pt1 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.480 03:11:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.480 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.481 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.481 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.481 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.481 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.481 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.481 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.481 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.481 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.481 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.739 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.739 "name": "raid_bdev1", 00:08:52.739 "uuid": "bd32ae4f-18c6-425a-890e-4df3d24998da", 00:08:52.739 "strip_size_kb": 0, 00:08:52.739 "state": "configuring", 00:08:52.739 "raid_level": "raid1", 00:08:52.739 "superblock": true, 00:08:52.739 "num_base_bdevs": 2, 00:08:52.739 "num_base_bdevs_discovered": 1, 00:08:52.739 "num_base_bdevs_operational": 2, 00:08:52.739 "base_bdevs_list": [ 00:08:52.739 { 00:08:52.739 "name": "pt1", 00:08:52.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.739 
"is_configured": true, 00:08:52.739 "data_offset": 2048, 00:08:52.739 "data_size": 63488 00:08:52.739 }, 00:08:52.739 { 00:08:52.739 "name": null, 00:08:52.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.739 "is_configured": false, 00:08:52.739 "data_offset": 2048, 00:08:52.739 "data_size": 63488 00:08:52.739 } 00:08:52.739 ] 00:08:52.739 }' 00:08:52.739 03:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.739 03:11:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.999 [2024-10-09 03:11:36.179512] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:52.999 [2024-10-09 03:11:36.179719] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.999 [2024-10-09 03:11:36.179763] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:52.999 [2024-10-09 03:11:36.179801] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.999 [2024-10-09 03:11:36.180426] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.999 [2024-10-09 03:11:36.180505] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:52.999 [2024-10-09 03:11:36.180636] 
bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:52.999 [2024-10-09 03:11:36.180697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:52.999 [2024-10-09 03:11:36.180871] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:52.999 [2024-10-09 03:11:36.180912] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:52.999 [2024-10-09 03:11:36.181207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:52.999 [2024-10-09 03:11:36.181418] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:52.999 [2024-10-09 03:11:36.181460] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:52.999 [2024-10-09 03:11:36.181655] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.999 pt2 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.999 
03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:52.999 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.999 "name": "raid_bdev1", 00:08:52.999 "uuid": "bd32ae4f-18c6-425a-890e-4df3d24998da", 00:08:52.999 "strip_size_kb": 0, 00:08:52.999 "state": "online", 00:08:52.999 "raid_level": "raid1", 00:08:52.999 "superblock": true, 00:08:52.999 "num_base_bdevs": 2, 00:08:52.999 "num_base_bdevs_discovered": 2, 00:08:52.999 "num_base_bdevs_operational": 2, 00:08:52.999 "base_bdevs_list": [ 00:08:52.999 { 00:08:52.999 "name": "pt1", 00:08:52.999 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:52.999 "is_configured": true, 00:08:52.999 "data_offset": 2048, 00:08:52.999 "data_size": 63488 00:08:52.999 }, 00:08:52.999 { 00:08:52.999 "name": "pt2", 00:08:52.999 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:52.999 "is_configured": true, 00:08:52.999 "data_offset": 2048, 00:08:52.999 "data_size": 63488 00:08:52.999 } 00:08:52.999 ] 00:08:52.999 }' 00:08:53.000 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:08:53.000 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.569 [2024-10-09 03:11:36.631047] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:53.569 "name": "raid_bdev1", 00:08:53.569 "aliases": [ 00:08:53.569 "bd32ae4f-18c6-425a-890e-4df3d24998da" 00:08:53.569 ], 00:08:53.569 "product_name": "Raid Volume", 00:08:53.569 "block_size": 512, 00:08:53.569 "num_blocks": 63488, 00:08:53.569 "uuid": "bd32ae4f-18c6-425a-890e-4df3d24998da", 00:08:53.569 "assigned_rate_limits": { 00:08:53.569 "rw_ios_per_sec": 0, 00:08:53.569 "rw_mbytes_per_sec": 0, 00:08:53.569 "r_mbytes_per_sec": 0, 00:08:53.569 "w_mbytes_per_sec": 0 
00:08:53.569 }, 00:08:53.569 "claimed": false, 00:08:53.569 "zoned": false, 00:08:53.569 "supported_io_types": { 00:08:53.569 "read": true, 00:08:53.569 "write": true, 00:08:53.569 "unmap": false, 00:08:53.569 "flush": false, 00:08:53.569 "reset": true, 00:08:53.569 "nvme_admin": false, 00:08:53.569 "nvme_io": false, 00:08:53.569 "nvme_io_md": false, 00:08:53.569 "write_zeroes": true, 00:08:53.569 "zcopy": false, 00:08:53.569 "get_zone_info": false, 00:08:53.569 "zone_management": false, 00:08:53.569 "zone_append": false, 00:08:53.569 "compare": false, 00:08:53.569 "compare_and_write": false, 00:08:53.569 "abort": false, 00:08:53.569 "seek_hole": false, 00:08:53.569 "seek_data": false, 00:08:53.569 "copy": false, 00:08:53.569 "nvme_iov_md": false 00:08:53.569 }, 00:08:53.569 "memory_domains": [ 00:08:53.569 { 00:08:53.569 "dma_device_id": "system", 00:08:53.569 "dma_device_type": 1 00:08:53.569 }, 00:08:53.569 { 00:08:53.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.569 "dma_device_type": 2 00:08:53.569 }, 00:08:53.569 { 00:08:53.569 "dma_device_id": "system", 00:08:53.569 "dma_device_type": 1 00:08:53.569 }, 00:08:53.569 { 00:08:53.569 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:53.569 "dma_device_type": 2 00:08:53.569 } 00:08:53.569 ], 00:08:53.569 "driver_specific": { 00:08:53.569 "raid": { 00:08:53.569 "uuid": "bd32ae4f-18c6-425a-890e-4df3d24998da", 00:08:53.569 "strip_size_kb": 0, 00:08:53.569 "state": "online", 00:08:53.569 "raid_level": "raid1", 00:08:53.569 "superblock": true, 00:08:53.569 "num_base_bdevs": 2, 00:08:53.569 "num_base_bdevs_discovered": 2, 00:08:53.569 "num_base_bdevs_operational": 2, 00:08:53.569 "base_bdevs_list": [ 00:08:53.569 { 00:08:53.569 "name": "pt1", 00:08:53.569 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.569 "is_configured": true, 00:08:53.569 "data_offset": 2048, 00:08:53.569 "data_size": 63488 00:08:53.569 }, 00:08:53.569 { 00:08:53.569 "name": "pt2", 00:08:53.569 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:08:53.569 "is_configured": true, 00:08:53.569 "data_offset": 2048, 00:08:53.569 "data_size": 63488 00:08:53.569 } 00:08:53.569 ] 00:08:53.569 } 00:08:53.569 } 00:08:53.569 }' 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:53.569 pt2' 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.569 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:53.569 [2024-10-09 03:11:36.786657] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bd32ae4f-18c6-425a-890e-4df3d24998da '!=' bd32ae4f-18c6-425a-890e-4df3d24998da ']' 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:08:53.570 [2024-10-09 03:11:36.834379] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.570 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:53.829 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:53.829 "name": "raid_bdev1", 00:08:53.829 "uuid": "bd32ae4f-18c6-425a-890e-4df3d24998da", 00:08:53.829 "strip_size_kb": 0, 00:08:53.829 "state": "online", 00:08:53.829 "raid_level": "raid1", 00:08:53.829 "superblock": true, 00:08:53.829 "num_base_bdevs": 2, 00:08:53.829 "num_base_bdevs_discovered": 1, 00:08:53.829 "num_base_bdevs_operational": 1, 00:08:53.829 "base_bdevs_list": [ 00:08:53.829 { 00:08:53.829 "name": null, 00:08:53.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:53.829 "is_configured": false, 00:08:53.829 "data_offset": 0, 00:08:53.829 "data_size": 63488 00:08:53.829 }, 00:08:53.829 { 00:08:53.829 "name": "pt2", 00:08:53.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.829 "is_configured": true, 00:08:53.829 "data_offset": 2048, 00:08:53.829 "data_size": 63488 00:08:53.829 } 00:08:53.829 ] 00:08:53.829 }' 00:08:53.829 03:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:53.829 03:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 [2024-10-09 03:11:37.221728] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.088 [2024-10-09 03:11:37.221875] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.088 [2024-10-09 03:11:37.222005] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.088 [2024-10-09 03:11:37.222086] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.088 [2024-10-09 03:11:37.222133] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:08:54.088 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 
00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.089 [2024-10-09 03:11:37.297558] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:54.089 [2024-10-09 03:11:37.297704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.089 [2024-10-09 03:11:37.297727] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:08:54.089 [2024-10-09 03:11:37.297740] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.089 [2024-10-09 03:11:37.300375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.089 [2024-10-09 03:11:37.300415] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:54.089 [2024-10-09 03:11:37.300506] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:54.089 [2024-10-09 03:11:37.300566] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:54.089 [2024-10-09 03:11:37.300676] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:08:54.089 [2024-10-09 03:11:37.300689] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:54.089 [2024-10-09 03:11:37.300958] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:54.089 [2024-10-09 03:11:37.301152] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:08:54.089 [2024-10-09 03:11:37.301163] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000008200 00:08:54.089 [2024-10-09 03:11:37.301322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.089 pt2 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:08:54.089 "name": "raid_bdev1", 00:08:54.089 "uuid": "bd32ae4f-18c6-425a-890e-4df3d24998da", 00:08:54.089 "strip_size_kb": 0, 00:08:54.089 "state": "online", 00:08:54.089 "raid_level": "raid1", 00:08:54.089 "superblock": true, 00:08:54.089 "num_base_bdevs": 2, 00:08:54.089 "num_base_bdevs_discovered": 1, 00:08:54.089 "num_base_bdevs_operational": 1, 00:08:54.089 "base_bdevs_list": [ 00:08:54.089 { 00:08:54.089 "name": null, 00:08:54.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.089 "is_configured": false, 00:08:54.089 "data_offset": 2048, 00:08:54.089 "data_size": 63488 00:08:54.089 }, 00:08:54.089 { 00:08:54.089 "name": "pt2", 00:08:54.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.089 "is_configured": true, 00:08:54.089 "data_offset": 2048, 00:08:54.089 "data_size": 63488 00:08:54.089 } 00:08:54.089 ] 00:08:54.089 }' 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.089 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.658 [2024-10-09 03:11:37.744968] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.658 [2024-10-09 03:11:37.745103] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.658 [2024-10-09 03:11:37.745226] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.658 [2024-10-09 03:11:37.745302] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.658 [2024-10-09 03:11:37.745366] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.658 [2024-10-09 03:11:37.800934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:54.658 [2024-10-09 03:11:37.801041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:54.658 [2024-10-09 03:11:37.801066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:08:54.658 [2024-10-09 03:11:37.801075] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:54.658 [2024-10-09 03:11:37.803637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:54.658 [2024-10-09 03:11:37.803678] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:54.658 [2024-10-09 03:11:37.803791] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:54.658 [2024-10-09 03:11:37.803856] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:54.658 [2024-10-09 03:11:37.804018] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:54.658 [2024-10-09 03:11:37.804029] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.658 [2024-10-09 03:11:37.804050] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:08:54.658 [2024-10-09 03:11:37.804124] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:54.658 [2024-10-09 03:11:37.804312] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:08:54.658 [2024-10-09 03:11:37.804326] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:54.658 [2024-10-09 03:11:37.804569] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:08:54.658 [2024-10-09 03:11:37.804712] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:08:54.658 [2024-10-09 03:11:37.804741] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:08:54.658 [2024-10-09 03:11:37.804959] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.658 pt1 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.658 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.658 "name": "raid_bdev1", 00:08:54.658 "uuid": "bd32ae4f-18c6-425a-890e-4df3d24998da", 00:08:54.658 "strip_size_kb": 0, 00:08:54.659 "state": "online", 00:08:54.659 "raid_level": "raid1", 00:08:54.659 "superblock": true, 00:08:54.659 "num_base_bdevs": 2, 00:08:54.659 "num_base_bdevs_discovered": 1, 00:08:54.659 "num_base_bdevs_operational": 
1, 00:08:54.659 "base_bdevs_list": [ 00:08:54.659 { 00:08:54.659 "name": null, 00:08:54.659 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:54.659 "is_configured": false, 00:08:54.659 "data_offset": 2048, 00:08:54.659 "data_size": 63488 00:08:54.659 }, 00:08:54.659 { 00:08:54.659 "name": "pt2", 00:08:54.659 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.659 "is_configured": true, 00:08:54.659 "data_offset": 2048, 00:08:54.659 "data_size": 63488 00:08:54.659 } 00:08:54.659 ] 00:08:54.659 }' 00:08:54.659 03:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.659 03:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:55.228 [2024-10-09 03:11:38.268311] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' bd32ae4f-18c6-425a-890e-4df3d24998da '!=' bd32ae4f-18c6-425a-890e-4df3d24998da ']' 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63311 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 63311 ']' 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 63311 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63311 00:08:55.228 killing process with pid 63311 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63311' 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 63311 00:08:55.228 [2024-10-09 03:11:38.334409] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:55.228 [2024-10-09 03:11:38.334534] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:55.228 03:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 63311 00:08:55.228 [2024-10-09 03:11:38.334589] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:55.228 [2024-10-09 03:11:38.334611] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state 
offline 00:08:55.487 [2024-10-09 03:11:38.559524] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.868 03:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:56.868 00:08:56.868 real 0m6.105s 00:08:56.868 user 0m8.962s 00:08:56.868 sys 0m1.031s 00:08:56.868 03:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:56.868 03:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.868 ************************************ 00:08:56.868 END TEST raid_superblock_test 00:08:56.868 ************************************ 00:08:56.868 03:11:39 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:56.868 03:11:39 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:08:56.868 03:11:39 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.868 03:11:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.868 ************************************ 00:08:56.868 START TEST raid_read_error_test 00:08:56.868 ************************************ 00:08:56.868 03:11:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 read 00:08:56.868 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:08:56.868 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:56.868 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:56.868 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:56.868 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.868 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:56.868 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 
00:08:56.868 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.868 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:56.868 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.8BGz2vh7A1 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63641 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63641 00:08:56.869 
03:11:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 63641 ']' 00:08:56.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.869 03:11:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.869 [2024-10-09 03:11:40.087749] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:08:56.869 [2024-10-09 03:11:40.087885] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63641 ] 00:08:57.128 [2024-10-09 03:11:40.257126] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.387 [2024-10-09 03:11:40.513539] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.646 [2024-10-09 03:11:40.738816] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.646 [2024-10-09 03:11:40.738869] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.647 03:11:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:57.647 03:11:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:08:57.647 03:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in 
"${base_bdevs[@]}" 00:08:57.647 03:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:57.647 03:11:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.647 03:11:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.908 BaseBdev1_malloc 00:08:57.908 03:11:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.908 03:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:57.908 03:11:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.908 03:11:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.908 true 00:08:57.908 03:11:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.908 03:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:57.908 03:11:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.908 03:11:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.908 [2024-10-09 03:11:40.979469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:57.908 [2024-10-09 03:11:40.979537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.908 [2024-10-09 03:11:40.979555] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:57.908 [2024-10-09 03:11:40.979566] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.908 [2024-10-09 03:11:40.982080] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.908 [2024-10-09 03:11:40.982223] vbdev_passthru.c: 710:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: BaseBdev1 00:08:57.908 BaseBdev1 00:08:57.908 03:11:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.908 03:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:57.908 03:11:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:57.908 03:11:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.908 03:11:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.908 BaseBdev2_malloc 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.908 true 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.908 [2024-10-09 03:11:41.063795] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:57.908 [2024-10-09 03:11:41.063949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.908 [2024-10-09 03:11:41.063970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:57.908 [2024-10-09 03:11:41.063982] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.908 [2024-10-09 03:11:41.066354] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.908 [2024-10-09 03:11:41.066393] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:57.908 BaseBdev2 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.908 [2024-10-09 03:11:41.075865] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:57.908 [2024-10-09 03:11:41.077950] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.908 [2024-10-09 03:11:41.078155] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:57.908 [2024-10-09 03:11:41.078170] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:57.908 [2024-10-09 03:11:41.078397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:57.908 [2024-10-09 03:11:41.078566] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:57.908 [2024-10-09 03:11:41.078577] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:57.908 [2024-10-09 03:11:41.078731] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.908 "name": "raid_bdev1", 00:08:57.908 "uuid": "bcc7cf49-d1d1-4dea-988a-92a752bf406f", 00:08:57.908 "strip_size_kb": 0, 00:08:57.908 "state": "online", 00:08:57.908 "raid_level": "raid1", 00:08:57.908 "superblock": true, 00:08:57.908 "num_base_bdevs": 2, 00:08:57.908 
"num_base_bdevs_discovered": 2, 00:08:57.908 "num_base_bdevs_operational": 2, 00:08:57.908 "base_bdevs_list": [ 00:08:57.908 { 00:08:57.908 "name": "BaseBdev1", 00:08:57.908 "uuid": "1d16636b-89a2-5c50-87fa-3c3f38dbeeac", 00:08:57.908 "is_configured": true, 00:08:57.908 "data_offset": 2048, 00:08:57.908 "data_size": 63488 00:08:57.908 }, 00:08:57.908 { 00:08:57.908 "name": "BaseBdev2", 00:08:57.908 "uuid": "2a20a2af-88c7-5ee9-9134-5ea15c107d80", 00:08:57.908 "is_configured": true, 00:08:57.908 "data_offset": 2048, 00:08:57.908 "data_size": 63488 00:08:57.908 } 00:08:57.908 ] 00:08:57.908 }' 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.908 03:11:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.478 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:58.478 03:11:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:58.478 [2024-10-09 03:11:41.576321] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:08:59.417 03:11:42 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:59.417 "name": "raid_bdev1", 00:08:59.417 "uuid": "bcc7cf49-d1d1-4dea-988a-92a752bf406f", 00:08:59.417 "strip_size_kb": 0, 00:08:59.417 "state": "online", 
00:08:59.417 "raid_level": "raid1", 00:08:59.417 "superblock": true, 00:08:59.417 "num_base_bdevs": 2, 00:08:59.417 "num_base_bdevs_discovered": 2, 00:08:59.417 "num_base_bdevs_operational": 2, 00:08:59.417 "base_bdevs_list": [ 00:08:59.417 { 00:08:59.417 "name": "BaseBdev1", 00:08:59.417 "uuid": "1d16636b-89a2-5c50-87fa-3c3f38dbeeac", 00:08:59.417 "is_configured": true, 00:08:59.417 "data_offset": 2048, 00:08:59.417 "data_size": 63488 00:08:59.417 }, 00:08:59.417 { 00:08:59.417 "name": "BaseBdev2", 00:08:59.417 "uuid": "2a20a2af-88c7-5ee9-9134-5ea15c107d80", 00:08:59.417 "is_configured": true, 00:08:59.417 "data_offset": 2048, 00:08:59.417 "data_size": 63488 00:08:59.417 } 00:08:59.417 ] 00:08:59.417 }' 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:59.417 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.678 [2024-10-09 03:11:42.905606] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:59.678 [2024-10-09 03:11:42.905746] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.678 [2024-10-09 03:11:42.908331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.678 [2024-10-09 03:11:42.908419] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.678 [2024-10-09 03:11:42.908522] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.678 [2024-10-09 03:11:42.908592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name 
raid_bdev1, state offline 00:08:59.678 { 00:08:59.678 "results": [ 00:08:59.678 { 00:08:59.678 "job": "raid_bdev1", 00:08:59.678 "core_mask": "0x1", 00:08:59.678 "workload": "randrw", 00:08:59.678 "percentage": 50, 00:08:59.678 "status": "finished", 00:08:59.678 "queue_depth": 1, 00:08:59.678 "io_size": 131072, 00:08:59.678 "runtime": 1.329882, 00:08:59.678 "iops": 14793.793735083264, 00:08:59.678 "mibps": 1849.224216885408, 00:08:59.678 "io_failed": 0, 00:08:59.678 "io_timeout": 0, 00:08:59.678 "avg_latency_us": 65.07370257467463, 00:08:59.678 "min_latency_us": 23.58777292576419, 00:08:59.678 "max_latency_us": 1409.4532751091704 00:08:59.678 } 00:08:59.678 ], 00:08:59.678 "core_count": 1 00:08:59.678 } 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63641 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 63641 ']' 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 63641 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63641 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63641' 00:08:59.678 killing process with pid 63641 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 63641 00:08:59.678 [2024-10-09 
03:11:42.953425] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:59.678 03:11:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 63641 00:08:59.938 [2024-10-09 03:11:43.103939] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:01.320 03:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.8BGz2vh7A1 00:09:01.320 03:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:01.320 03:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:01.320 03:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:01.320 03:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:01.320 ************************************ 00:09:01.320 END TEST raid_read_error_test 00:09:01.320 ************************************ 00:09:01.320 03:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:01.320 03:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:01.320 03:11:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:01.320 00:09:01.320 real 0m4.571s 00:09:01.320 user 0m5.202s 00:09:01.320 sys 0m0.680s 00:09:01.320 03:11:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:01.320 03:11:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.320 03:11:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:01.320 03:11:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:01.320 03:11:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:01.320 03:11:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:01.320 ************************************ 00:09:01.320 START TEST 
raid_write_error_test 00:09:01.320 ************************************ 00:09:01.320 03:11:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 2 write 00:09:01.320 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:01.320 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:01.320 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:01.320 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:01.320 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.320 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:01.320 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.320 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.320 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:01.320 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:01.320 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:01.580 03:11:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.s39KN1lqHx 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63781 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63781 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 63781 ']' 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:01.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:01.580 03:11:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.580 [2024-10-09 03:11:44.722357] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:09:01.580 [2024-10-09 03:11:44.722471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63781 ] 00:09:01.840 [2024-10-09 03:11:44.885970] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.099 [2024-10-09 03:11:45.142804] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.099 [2024-10-09 03:11:45.384189] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.099 [2024-10-09 03:11:45.384237] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.358 BaseBdev1_malloc 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.358 true 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.358 [2024-10-09 03:11:45.620310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:02.358 [2024-10-09 03:11:45.620378] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.358 [2024-10-09 03:11:45.620397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:02.358 [2024-10-09 03:11:45.620409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.358 [2024-10-09 03:11:45.622792] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.358 [2024-10-09 03:11:45.622833] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:02.358 BaseBdev1 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.358 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.618 BaseBdev2_malloc 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:02.618 03:11:45 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.618 true 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.618 [2024-10-09 03:11:45.707945] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:02.618 [2024-10-09 03:11:45.708078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.618 [2024-10-09 03:11:45.708098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:02.618 [2024-10-09 03:11:45.708109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.618 [2024-10-09 03:11:45.710483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.618 [2024-10-09 03:11:45.710521] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:02.618 BaseBdev2 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.618 [2024-10-09 03:11:45.720010] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:02.618 [2024-10-09 03:11:45.722080] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.618 [2024-10-09 03:11:45.722301] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:02.618 [2024-10-09 03:11:45.722317] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:02.618 [2024-10-09 03:11:45.722545] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:02.618 [2024-10-09 03:11:45.722727] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:02.618 [2024-10-09 03:11:45.722738] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:02.618 [2024-10-09 03:11:45.722918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:02.618 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.618 "name": "raid_bdev1", 00:09:02.618 "uuid": "dbda6473-dbc9-4fa8-892d-323c50afd25d", 00:09:02.618 "strip_size_kb": 0, 00:09:02.618 "state": "online", 00:09:02.618 "raid_level": "raid1", 00:09:02.618 "superblock": true, 00:09:02.618 "num_base_bdevs": 2, 00:09:02.618 "num_base_bdevs_discovered": 2, 00:09:02.618 "num_base_bdevs_operational": 2, 00:09:02.618 "base_bdevs_list": [ 00:09:02.618 { 00:09:02.618 "name": "BaseBdev1", 00:09:02.618 "uuid": "f7c1e784-3bec-56f1-a56b-433bcd765e6a", 00:09:02.619 "is_configured": true, 00:09:02.619 "data_offset": 2048, 00:09:02.619 "data_size": 63488 00:09:02.619 }, 00:09:02.619 { 00:09:02.619 "name": "BaseBdev2", 00:09:02.619 "uuid": "d8846bf6-cb44-5676-8e5a-d27a2a721b49", 00:09:02.619 "is_configured": true, 00:09:02.619 "data_offset": 2048, 00:09:02.619 "data_size": 63488 00:09:02.619 } 00:09:02.619 ] 00:09:02.619 }' 00:09:02.619 03:11:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.619 03:11:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.877 03:11:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:02.877 03:11:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:03.149 [2024-10-09 03:11:46.228907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.102 [2024-10-09 03:11:47.146698] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:04.102 [2024-10-09 03:11:47.146792] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:04.102 [2024-10-09 03:11:47.147027] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005ee0 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.102 "name": "raid_bdev1", 00:09:04.102 "uuid": "dbda6473-dbc9-4fa8-892d-323c50afd25d", 00:09:04.102 "strip_size_kb": 0, 00:09:04.102 "state": "online", 00:09:04.102 "raid_level": "raid1", 00:09:04.102 "superblock": true, 00:09:04.102 "num_base_bdevs": 2, 00:09:04.102 "num_base_bdevs_discovered": 1, 00:09:04.102 "num_base_bdevs_operational": 1, 00:09:04.102 "base_bdevs_list": [ 00:09:04.102 { 00:09:04.102 "name": null, 00:09:04.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.102 "is_configured": false, 00:09:04.102 "data_offset": 0, 00:09:04.102 "data_size": 63488 00:09:04.102 }, 00:09:04.102 { 00:09:04.102 "name": 
"BaseBdev2", 00:09:04.102 "uuid": "d8846bf6-cb44-5676-8e5a-d27a2a721b49", 00:09:04.102 "is_configured": true, 00:09:04.102 "data_offset": 2048, 00:09:04.102 "data_size": 63488 00:09:04.102 } 00:09:04.102 ] 00:09:04.102 }' 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.102 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.362 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:04.362 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:04.362 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.362 [2024-10-09 03:11:47.531579] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:04.362 [2024-10-09 03:11:47.531722] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.362 [2024-10-09 03:11:47.534352] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.362 [2024-10-09 03:11:47.534442] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:04.362 [2024-10-09 03:11:47.534523] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:04.362 [2024-10-09 03:11:47.534567] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:04.362 { 00:09:04.362 "results": [ 00:09:04.362 { 00:09:04.362 "job": "raid_bdev1", 00:09:04.362 "core_mask": "0x1", 00:09:04.362 "workload": "randrw", 00:09:04.362 "percentage": 50, 00:09:04.362 "status": "finished", 00:09:04.362 "queue_depth": 1, 00:09:04.362 "io_size": 131072, 00:09:04.362 "runtime": 1.303189, 00:09:04.362 "iops": 17983.577209445444, 00:09:04.362 "mibps": 2247.9471511806805, 00:09:04.362 "io_failed": 0, 00:09:04.362 "io_timeout": 0, 
00:09:04.362 "avg_latency_us": 53.03995748711906,
00:09:04.362 "min_latency_us": 22.581659388646287,
00:09:04.362 "max_latency_us": 1345.0620087336245
00:09:04.362 }
00:09:04.362 ],
00:09:04.362 "core_count": 1
00:09:04.362 }
00:09:04.363 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:04.363 03:11:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63781
00:09:04.363 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 63781 ']'
00:09:04.363 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 63781
00:09:04.363 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname
00:09:04.363 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:09:04.363 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63781
00:09:04.363 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:09:04.363 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:09:04.363 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63781'
killing process with pid 63781
00:09:04.363 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 63781
[2024-10-09 03:11:47.573801] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:09:04.363 03:11:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 63781
00:09:04.623 [2024-10-09 03:11:47.721591] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:09:06.004 03:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.s39KN1lqHx
00:09:06.004 03:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1
00:09:06.004 03:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}'
00:09:06.005 03:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00
00:09:06.005 03:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1
00:09:06.005 03:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:06.005 03:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0
00:09:06.005 03:11:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]]
00:09:06.005
00:09:06.005 real 0m4.556s
00:09:06.005 user 0m5.205s
00:09:06.005 sys 0m0.645s
00:09:06.005 ************************************
00:09:06.005 END TEST raid_write_error_test
00:09:06.005 ************************************
00:09:06.005 03:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable
00:09:06.005 03:11:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.005 03:11:49 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4}
00:09:06.005 03:11:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1
00:09:06.005 03:11:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false
00:09:06.005 03:11:49 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']'
00:09:06.005 03:11:49 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable
00:09:06.005 03:11:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:09:06.005 ************************************
00:09:06.005 START TEST raid_state_function_test
00:09:06.005 ************************************
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 false
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 ))
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ ))
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs ))
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3')
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']'
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64'
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']'
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg=
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63925
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63925'
Process raid pid: 63925
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63925
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 63925 ']'
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable
00:09:06.005 03:11:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:06.265 [2024-10-09 03:11:49.338003] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization...
00:09:06.265 [2024-10-09 03:11:49.338209] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:06.265 [2024-10-09 03:11:49.489510] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:06.524 [2024-10-09 03:11:49.724316] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:09:06.783 [2024-10-09 03:11:49.951621] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:06.783 [2024-10-09 03:11:49.951774] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:07.043 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:09:07.043 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0
00:09:07.043 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:07.043 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.044 [2024-10-09 03:11:50.168455] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:07.044 [2024-10-09 03:11:50.168600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:07.044 [2024-10-09 03:11:50.168638] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:07.044 [2024-10-09 03:11:50.168663] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:07.044 [2024-10-09 03:11:50.168695] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:07.044 [2024-10-09 03:11:50.168719] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:07.044 "name": "Existed_Raid",
00:09:07.044 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.044 "strip_size_kb": 64,
00:09:07.044 "state": "configuring",
00:09:07.044 "raid_level": "raid0",
00:09:07.044 "superblock": false,
00:09:07.044 "num_base_bdevs": 3,
00:09:07.044 "num_base_bdevs_discovered": 0,
00:09:07.044 "num_base_bdevs_operational": 3,
00:09:07.044 "base_bdevs_list": [
00:09:07.044 {
00:09:07.044 "name": "BaseBdev1",
00:09:07.044 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.044 "is_configured": false,
00:09:07.044 "data_offset": 0,
00:09:07.044 "data_size": 0
00:09:07.044 },
00:09:07.044 {
00:09:07.044 "name": "BaseBdev2",
00:09:07.044 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.044 "is_configured": false,
00:09:07.044 "data_offset": 0,
00:09:07.044 "data_size": 0
00:09:07.044 },
00:09:07.044 {
00:09:07.044 "name": "BaseBdev3",
00:09:07.044 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.044 "is_configured": false,
00:09:07.044 "data_offset": 0,
00:09:07.044 "data_size": 0
00:09:07.044 }
00:09:07.044 ]
00:09:07.044 }'
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:07.044 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.613 [2024-10-09 03:11:50.659537] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:07.613 [2024-10-09 03:11:50.659592] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.613 [2024-10-09 03:11:50.671528] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:07.613 [2024-10-09 03:11:50.671619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:07.613 [2024-10-09 03:11:50.671651] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:07.613 [2024-10-09 03:11:50.671676] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:07.613 [2024-10-09 03:11:50.671701] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:07.613 [2024-10-09 03:11:50.671725] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.613 [2024-10-09 03:11:50.736496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
BaseBdev1
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.613 [
00:09:07.613 {
00:09:07.613 "name": "BaseBdev1",
00:09:07.613 "aliases": [
00:09:07.613 "28899b96-35ae-4d47-b93f-55a846c3f5b3"
00:09:07.613 ],
00:09:07.613 "product_name": "Malloc disk",
00:09:07.613 "block_size": 512,
00:09:07.613 "num_blocks": 65536,
00:09:07.613 "uuid": "28899b96-35ae-4d47-b93f-55a846c3f5b3",
00:09:07.613 "assigned_rate_limits": {
00:09:07.613 "rw_ios_per_sec": 0,
00:09:07.613 "rw_mbytes_per_sec": 0,
00:09:07.613 "r_mbytes_per_sec": 0,
00:09:07.613 "w_mbytes_per_sec": 0
00:09:07.613 },
00:09:07.613 "claimed": true,
00:09:07.613 "claim_type": "exclusive_write",
00:09:07.613 "zoned": false,
00:09:07.613 "supported_io_types": {
00:09:07.613 "read": true,
00:09:07.613 "write": true,
00:09:07.613 "unmap": true,
00:09:07.613 "flush": true,
00:09:07.613 "reset": true,
00:09:07.613 "nvme_admin": false,
00:09:07.613 "nvme_io": false,
00:09:07.613 "nvme_io_md": false,
00:09:07.613 "write_zeroes": true,
00:09:07.613 "zcopy": true,
00:09:07.613 "get_zone_info": false,
00:09:07.613 "zone_management": false,
00:09:07.613 "zone_append": false,
00:09:07.613 "compare": false,
00:09:07.613 "compare_and_write": false,
00:09:07.613 "abort": true,
00:09:07.613 "seek_hole": false,
00:09:07.613 "seek_data": false,
00:09:07.613 "copy": true,
00:09:07.613 "nvme_iov_md": false
00:09:07.613 },
00:09:07.613 "memory_domains": [
00:09:07.613 {
00:09:07.613 "dma_device_id": "system",
00:09:07.613 "dma_device_type": 1
00:09:07.613 },
00:09:07.613 {
00:09:07.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:07.613 "dma_device_type": 2
00:09:07.613 }
00:09:07.613 ],
00:09:07.613 "driver_specific": {}
00:09:07.613 }
00:09:07.613 ]
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:07.613 "name": "Existed_Raid",
00:09:07.613 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.613 "strip_size_kb": 64,
00:09:07.613 "state": "configuring",
00:09:07.613 "raid_level": "raid0",
00:09:07.613 "superblock": false,
00:09:07.613 "num_base_bdevs": 3,
00:09:07.613 "num_base_bdevs_discovered": 1,
00:09:07.613 "num_base_bdevs_operational": 3,
00:09:07.613 "base_bdevs_list": [
00:09:07.613 {
00:09:07.613 "name": "BaseBdev1",
00:09:07.613 "uuid": "28899b96-35ae-4d47-b93f-55a846c3f5b3",
00:09:07.613 "is_configured": true,
00:09:07.613 "data_offset": 0,
00:09:07.613 "data_size": 65536
00:09:07.613 },
00:09:07.613 {
00:09:07.613 "name": "BaseBdev2",
00:09:07.613 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.613 "is_configured": false,
00:09:07.613 "data_offset": 0,
00:09:07.613 "data_size": 0
00:09:07.613 },
00:09:07.613 {
00:09:07.613 "name": "BaseBdev3",
00:09:07.613 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:07.613 "is_configured": false,
00:09:07.613 "data_offset": 0,
00:09:07.613 "data_size": 0
00:09:07.613 }
00:09:07.613 ]
00:09:07.613 }'
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:07.613 03:11:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.184 [2024-10-09 03:11:51.211734] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:08.184 [2024-10-09 03:11:51.211898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.184 [2024-10-09 03:11:51.223740] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:08.184 [2024-10-09 03:11:51.225952] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:08.184 [2024-10-09 03:11:51.226034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:08.184 [2024-10-09 03:11:51.226064] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:09:08.184 [2024-10-09 03:11:51.226086] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:08.184 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:08.185 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:08.185 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.185 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.185 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.185 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:08.185 "name": "Existed_Raid",
00:09:08.185 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:08.185 "strip_size_kb": 64,
00:09:08.185 "state": "configuring",
00:09:08.185 "raid_level": "raid0",
00:09:08.185 "superblock": false,
00:09:08.185 "num_base_bdevs": 3,
00:09:08.185 "num_base_bdevs_discovered": 1,
00:09:08.185 "num_base_bdevs_operational": 3,
00:09:08.185 "base_bdevs_list": [
00:09:08.185 {
00:09:08.185 "name": "BaseBdev1",
00:09:08.185 "uuid": "28899b96-35ae-4d47-b93f-55a846c3f5b3",
00:09:08.185 "is_configured": true,
00:09:08.185 "data_offset": 0,
00:09:08.185 "data_size": 65536
00:09:08.185 },
00:09:08.185 {
00:09:08.185 "name": "BaseBdev2",
00:09:08.185 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:08.185 "is_configured": false,
00:09:08.185 "data_offset": 0,
00:09:08.185 "data_size": 0
00:09:08.185 },
00:09:08.185 {
00:09:08.185 "name": "BaseBdev3",
00:09:08.185 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:08.185 "is_configured": false,
00:09:08.185 "data_offset": 0,
00:09:08.185 "data_size": 0
00:09:08.185 }
00:09:08.185 ]
00:09:08.185 }'
00:09:08.185 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:08.185 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.445 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:08.445 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.445 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.445 [2024-10-09 03:11:51.734310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
BaseBdev2
00:09:08.445 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.445 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:08.445 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2
00:09:08.445 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:08.445 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:08.445 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:08.445 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:08.445 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:08.445 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.445 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.705 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.705 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:08.705 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.705 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.705 [
00:09:08.705 {
00:09:08.705 "name": "BaseBdev2",
00:09:08.705 "aliases": [
00:09:08.705 "5e106f0b-b2eb-4bb2-9cc7-2296eea68de8"
00:09:08.705 ],
00:09:08.705 "product_name": "Malloc disk",
00:09:08.705 "block_size": 512,
00:09:08.705 "num_blocks": 65536,
00:09:08.705 "uuid": "5e106f0b-b2eb-4bb2-9cc7-2296eea68de8",
00:09:08.705 "assigned_rate_limits": {
00:09:08.705 "rw_ios_per_sec": 0,
00:09:08.705 "rw_mbytes_per_sec": 0,
00:09:08.705 "r_mbytes_per_sec": 0,
00:09:08.705 "w_mbytes_per_sec": 0
00:09:08.705 },
00:09:08.705 "claimed": true,
00:09:08.705 "claim_type": "exclusive_write",
00:09:08.705 "zoned": false,
00:09:08.705 "supported_io_types": {
00:09:08.705 "read": true,
00:09:08.705 "write": true,
00:09:08.705 "unmap": true,
00:09:08.705 "flush": true,
00:09:08.705 "reset": true,
00:09:08.705 "nvme_admin": false,
00:09:08.705 "nvme_io": false,
00:09:08.705 "nvme_io_md": false,
00:09:08.705 "write_zeroes": true,
00:09:08.705 "zcopy": true,
00:09:08.705 "get_zone_info": false,
00:09:08.705 "zone_management": false,
00:09:08.705 "zone_append": false,
00:09:08.705 "compare": false,
00:09:08.705 "compare_and_write": false,
00:09:08.705 "abort": true,
00:09:08.705 "seek_hole": false,
00:09:08.705 "seek_data": false,
00:09:08.705 "copy": true,
00:09:08.705 "nvme_iov_md": false
00:09:08.706 },
00:09:08.706 "memory_domains": [
00:09:08.706 {
00:09:08.706 "dma_device_id": "system",
00:09:08.706 "dma_device_type": 1
00:09:08.706 },
00:09:08.706 {
00:09:08.706 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:08.706 "dma_device_type": 2
00:09:08.706 }
00:09:08.706 ],
00:09:08.706 "driver_specific": {}
00:09:08.706 }
00:09:08.706 ]
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:08.706 "name": "Existed_Raid",
00:09:08.706 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:08.706 "strip_size_kb": 64,
00:09:08.706 "state": "configuring",
00:09:08.706 "raid_level": "raid0",
00:09:08.706 "superblock": false,
00:09:08.706 "num_base_bdevs": 3,
00:09:08.706 "num_base_bdevs_discovered": 2,
00:09:08.706 "num_base_bdevs_operational": 3,
00:09:08.706 "base_bdevs_list": [
00:09:08.706 {
00:09:08.706 "name": "BaseBdev1",
00:09:08.706 "uuid": "28899b96-35ae-4d47-b93f-55a846c3f5b3",
00:09:08.706 "is_configured": true,
00:09:08.706 "data_offset": 0,
00:09:08.706 "data_size": 65536
00:09:08.706 },
00:09:08.706 {
00:09:08.706 "name": "BaseBdev2",
00:09:08.706 "uuid": "5e106f0b-b2eb-4bb2-9cc7-2296eea68de8",
00:09:08.706 "is_configured": true,
00:09:08.706 "data_offset": 0,
00:09:08.706 "data_size": 65536
00:09:08.706 },
00:09:08.706 {
00:09:08.706 "name": "BaseBdev3",
00:09:08.706 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:08.706 "is_configured": false,
00:09:08.706 "data_offset": 0,
00:09:08.706 "data_size": 0
00:09:08.706 }
00:09:08.706 ]
00:09:08.706 }'
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:08.706 03:11:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.966 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:09:08.966 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.966 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:08.966 [2024-10-09 03:11:52.259883] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:09:08.966 [2024-10-09 03:11:52.260024] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:08.966 [2024-10-09 03:11:52.260051] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512
00:09:08.966 [2024-10-09 03:11:52.260367] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10
00:09:08.966 [2024-10-09 03:11:52.260552] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:08.966 [2024-10-09 03:11:52.260566] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:09:08.966 [2024-10-09 03:11:52.260836] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
BaseBdev3
00:09:08.966 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:08.966 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:09:08.966 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3
00:09:08.966 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout=
00:09:08.966 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i
00:09:08.966 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]]
00:09:08.966 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000
00:09:08.966 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine
00:09:08.966 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:08.966 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.226 [ 00:09:09.226 { 00:09:09.226 "name": "BaseBdev3", 00:09:09.226 "aliases": [ 00:09:09.226 "5bcdc724-e832-43a9-a0f5-a10793ef3e5a" 00:09:09.226 ], 00:09:09.226 "product_name": "Malloc disk", 00:09:09.226 "block_size": 512, 00:09:09.226 "num_blocks": 65536, 00:09:09.226 "uuid": "5bcdc724-e832-43a9-a0f5-a10793ef3e5a", 00:09:09.226 "assigned_rate_limits": { 00:09:09.226 "rw_ios_per_sec": 0, 00:09:09.226 "rw_mbytes_per_sec": 0, 00:09:09.226 "r_mbytes_per_sec": 0, 00:09:09.226 "w_mbytes_per_sec": 0 00:09:09.226 }, 00:09:09.226 "claimed": true, 00:09:09.226 "claim_type": "exclusive_write", 00:09:09.226 "zoned": false, 00:09:09.226 "supported_io_types": { 00:09:09.226 "read": true, 00:09:09.226 "write": true, 00:09:09.226 "unmap": true, 00:09:09.226 "flush": true, 00:09:09.226 "reset": true, 00:09:09.226 "nvme_admin": false, 00:09:09.226 "nvme_io": false, 00:09:09.226 "nvme_io_md": false, 00:09:09.226 "write_zeroes": true, 00:09:09.226 "zcopy": true, 00:09:09.226 "get_zone_info": false, 00:09:09.226 "zone_management": false, 00:09:09.226 "zone_append": false, 00:09:09.226 "compare": false, 00:09:09.226 "compare_and_write": false, 00:09:09.226 "abort": true, 00:09:09.226 "seek_hole": false, 00:09:09.226 "seek_data": false, 00:09:09.226 "copy": true, 00:09:09.226 "nvme_iov_md": false 00:09:09.226 }, 00:09:09.226 "memory_domains": [ 00:09:09.226 { 00:09:09.226 "dma_device_id": "system", 00:09:09.226 "dma_device_type": 1 00:09:09.226 }, 00:09:09.226 { 00:09:09.226 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.226 "dma_device_type": 2 00:09:09.226 } 00:09:09.226 ], 00:09:09.226 "driver_specific": {} 00:09:09.226 } 00:09:09.226 ] 
00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.226 "name": "Existed_Raid", 00:09:09.226 "uuid": "fe5dd1a5-59d2-41cd-ac7d-0b785c42ffdc", 00:09:09.226 "strip_size_kb": 64, 00:09:09.226 "state": "online", 00:09:09.226 "raid_level": "raid0", 00:09:09.226 "superblock": false, 00:09:09.226 "num_base_bdevs": 3, 00:09:09.226 "num_base_bdevs_discovered": 3, 00:09:09.226 "num_base_bdevs_operational": 3, 00:09:09.226 "base_bdevs_list": [ 00:09:09.226 { 00:09:09.226 "name": "BaseBdev1", 00:09:09.226 "uuid": "28899b96-35ae-4d47-b93f-55a846c3f5b3", 00:09:09.226 "is_configured": true, 00:09:09.226 "data_offset": 0, 00:09:09.226 "data_size": 65536 00:09:09.226 }, 00:09:09.226 { 00:09:09.226 "name": "BaseBdev2", 00:09:09.226 "uuid": "5e106f0b-b2eb-4bb2-9cc7-2296eea68de8", 00:09:09.226 "is_configured": true, 00:09:09.226 "data_offset": 0, 00:09:09.226 "data_size": 65536 00:09:09.226 }, 00:09:09.226 { 00:09:09.226 "name": "BaseBdev3", 00:09:09.226 "uuid": "5bcdc724-e832-43a9-a0f5-a10793ef3e5a", 00:09:09.226 "is_configured": true, 00:09:09.226 "data_offset": 0, 00:09:09.226 "data_size": 65536 00:09:09.226 } 00:09:09.226 ] 00:09:09.226 }' 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.226 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.484 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:09.484 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:09.484 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.484 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:09.484 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.484 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.484 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:09.484 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.484 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.484 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.484 [2024-10-09 03:11:52.783360] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.743 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.743 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.743 "name": "Existed_Raid", 00:09:09.743 "aliases": [ 00:09:09.743 "fe5dd1a5-59d2-41cd-ac7d-0b785c42ffdc" 00:09:09.743 ], 00:09:09.743 "product_name": "Raid Volume", 00:09:09.743 "block_size": 512, 00:09:09.743 "num_blocks": 196608, 00:09:09.743 "uuid": "fe5dd1a5-59d2-41cd-ac7d-0b785c42ffdc", 00:09:09.743 "assigned_rate_limits": { 00:09:09.743 "rw_ios_per_sec": 0, 00:09:09.743 "rw_mbytes_per_sec": 0, 00:09:09.743 "r_mbytes_per_sec": 0, 00:09:09.743 "w_mbytes_per_sec": 0 00:09:09.743 }, 00:09:09.743 "claimed": false, 00:09:09.743 "zoned": false, 00:09:09.744 "supported_io_types": { 00:09:09.744 "read": true, 00:09:09.744 "write": true, 00:09:09.744 "unmap": true, 00:09:09.744 "flush": true, 00:09:09.744 "reset": true, 00:09:09.744 "nvme_admin": false, 00:09:09.744 "nvme_io": false, 00:09:09.744 "nvme_io_md": false, 00:09:09.744 "write_zeroes": true, 00:09:09.744 "zcopy": false, 00:09:09.744 "get_zone_info": false, 00:09:09.744 "zone_management": false, 00:09:09.744 
"zone_append": false, 00:09:09.744 "compare": false, 00:09:09.744 "compare_and_write": false, 00:09:09.744 "abort": false, 00:09:09.744 "seek_hole": false, 00:09:09.744 "seek_data": false, 00:09:09.744 "copy": false, 00:09:09.744 "nvme_iov_md": false 00:09:09.744 }, 00:09:09.744 "memory_domains": [ 00:09:09.744 { 00:09:09.744 "dma_device_id": "system", 00:09:09.744 "dma_device_type": 1 00:09:09.744 }, 00:09:09.744 { 00:09:09.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.744 "dma_device_type": 2 00:09:09.744 }, 00:09:09.744 { 00:09:09.744 "dma_device_id": "system", 00:09:09.744 "dma_device_type": 1 00:09:09.744 }, 00:09:09.744 { 00:09:09.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.744 "dma_device_type": 2 00:09:09.744 }, 00:09:09.744 { 00:09:09.744 "dma_device_id": "system", 00:09:09.744 "dma_device_type": 1 00:09:09.744 }, 00:09:09.744 { 00:09:09.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.744 "dma_device_type": 2 00:09:09.744 } 00:09:09.744 ], 00:09:09.744 "driver_specific": { 00:09:09.744 "raid": { 00:09:09.744 "uuid": "fe5dd1a5-59d2-41cd-ac7d-0b785c42ffdc", 00:09:09.744 "strip_size_kb": 64, 00:09:09.744 "state": "online", 00:09:09.744 "raid_level": "raid0", 00:09:09.744 "superblock": false, 00:09:09.744 "num_base_bdevs": 3, 00:09:09.744 "num_base_bdevs_discovered": 3, 00:09:09.744 "num_base_bdevs_operational": 3, 00:09:09.744 "base_bdevs_list": [ 00:09:09.744 { 00:09:09.744 "name": "BaseBdev1", 00:09:09.744 "uuid": "28899b96-35ae-4d47-b93f-55a846c3f5b3", 00:09:09.744 "is_configured": true, 00:09:09.744 "data_offset": 0, 00:09:09.744 "data_size": 65536 00:09:09.744 }, 00:09:09.744 { 00:09:09.744 "name": "BaseBdev2", 00:09:09.744 "uuid": "5e106f0b-b2eb-4bb2-9cc7-2296eea68de8", 00:09:09.744 "is_configured": true, 00:09:09.744 "data_offset": 0, 00:09:09.744 "data_size": 65536 00:09:09.744 }, 00:09:09.744 { 00:09:09.744 "name": "BaseBdev3", 00:09:09.744 "uuid": "5bcdc724-e832-43a9-a0f5-a10793ef3e5a", 00:09:09.744 "is_configured": true, 
00:09:09.744 "data_offset": 0, 00:09:09.744 "data_size": 65536 00:09:09.744 } 00:09:09.744 ] 00:09:09.744 } 00:09:09.744 } 00:09:09.744 }' 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:09.744 BaseBdev2 00:09:09.744 BaseBdev3' 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.744 03:11:52 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.744 03:11:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:09.744 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:09.744 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:09.744 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:09.744 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:09.744 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.744 [2024-10-09 03:11:53.026636] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:09.744 [2024-10-09 03:11:53.026674] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:09.744 [2024-10-09 03:11:53.026735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.004 "name": "Existed_Raid", 00:09:10.004 "uuid": "fe5dd1a5-59d2-41cd-ac7d-0b785c42ffdc", 00:09:10.004 "strip_size_kb": 64, 00:09:10.004 "state": "offline", 00:09:10.004 "raid_level": "raid0", 00:09:10.004 "superblock": false, 00:09:10.004 "num_base_bdevs": 3, 00:09:10.004 "num_base_bdevs_discovered": 2, 00:09:10.004 "num_base_bdevs_operational": 2, 00:09:10.004 "base_bdevs_list": [ 00:09:10.004 { 00:09:10.004 "name": null, 00:09:10.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.004 "is_configured": false, 00:09:10.004 "data_offset": 0, 00:09:10.004 "data_size": 65536 00:09:10.004 }, 00:09:10.004 { 00:09:10.004 "name": "BaseBdev2", 00:09:10.004 "uuid": "5e106f0b-b2eb-4bb2-9cc7-2296eea68de8", 00:09:10.004 "is_configured": true, 00:09:10.004 "data_offset": 0, 00:09:10.004 "data_size": 65536 00:09:10.004 }, 00:09:10.004 { 00:09:10.004 "name": "BaseBdev3", 00:09:10.004 "uuid": "5bcdc724-e832-43a9-a0f5-a10793ef3e5a", 00:09:10.004 "is_configured": true, 00:09:10.004 "data_offset": 0, 00:09:10.004 "data_size": 65536 00:09:10.004 } 00:09:10.004 ] 00:09:10.004 }' 00:09:10.004 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.004 03:11:53 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.263 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:10.263 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.263 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.263 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.263 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.263 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:10.263 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.523 [2024-10-09 03:11:53.582731] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.523 03:11:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.523 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.523 [2024-10-09 03:11:53.748514] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:10.523 [2024-10-09 03:11:53.748583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # 
set +x 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.783 BaseBdev2 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.783 [ 00:09:10.783 { 00:09:10.783 "name": "BaseBdev2", 00:09:10.783 "aliases": [ 00:09:10.783 "d07e6965-6a79-4377-8e2a-344973d1fdee" 00:09:10.783 ], 00:09:10.783 "product_name": "Malloc disk", 00:09:10.783 "block_size": 512, 00:09:10.783 "num_blocks": 65536, 00:09:10.783 "uuid": "d07e6965-6a79-4377-8e2a-344973d1fdee", 00:09:10.783 "assigned_rate_limits": { 00:09:10.783 "rw_ios_per_sec": 0, 00:09:10.783 "rw_mbytes_per_sec": 0, 00:09:10.783 "r_mbytes_per_sec": 0, 00:09:10.783 "w_mbytes_per_sec": 0 00:09:10.783 }, 00:09:10.783 "claimed": false, 00:09:10.783 "zoned": false, 00:09:10.783 "supported_io_types": { 00:09:10.783 "read": true, 00:09:10.783 "write": true, 00:09:10.783 "unmap": true, 00:09:10.783 "flush": true, 00:09:10.783 "reset": true, 00:09:10.783 "nvme_admin": false, 00:09:10.783 "nvme_io": false, 00:09:10.783 "nvme_io_md": false, 00:09:10.783 "write_zeroes": true, 00:09:10.783 "zcopy": true, 00:09:10.783 "get_zone_info": false, 00:09:10.783 "zone_management": false, 00:09:10.783 "zone_append": false, 00:09:10.783 "compare": false, 00:09:10.783 "compare_and_write": false, 00:09:10.783 "abort": true, 00:09:10.783 "seek_hole": false, 00:09:10.783 "seek_data": false, 00:09:10.783 "copy": true, 00:09:10.783 "nvme_iov_md": false 00:09:10.783 }, 00:09:10.783 "memory_domains": [ 00:09:10.783 { 00:09:10.783 "dma_device_id": "system", 00:09:10.783 "dma_device_type": 1 00:09:10.783 }, 
00:09:10.783 { 00:09:10.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.783 "dma_device_type": 2 00:09:10.783 } 00:09:10.783 ], 00:09:10.783 "driver_specific": {} 00:09:10.783 } 00:09:10.783 ] 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.783 03:11:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:10.784 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.784 03:11:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.784 BaseBdev3 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.784 [ 00:09:10.784 { 00:09:10.784 "name": "BaseBdev3", 00:09:10.784 "aliases": [ 00:09:10.784 "ecb889a2-cdf2-46a8-b8db-71c1496e2179" 00:09:10.784 ], 00:09:10.784 "product_name": "Malloc disk", 00:09:10.784 "block_size": 512, 00:09:10.784 "num_blocks": 65536, 00:09:10.784 "uuid": "ecb889a2-cdf2-46a8-b8db-71c1496e2179", 00:09:10.784 "assigned_rate_limits": { 00:09:10.784 "rw_ios_per_sec": 0, 00:09:10.784 "rw_mbytes_per_sec": 0, 00:09:10.784 "r_mbytes_per_sec": 0, 00:09:10.784 "w_mbytes_per_sec": 0 00:09:10.784 }, 00:09:10.784 "claimed": false, 00:09:10.784 "zoned": false, 00:09:10.784 "supported_io_types": { 00:09:10.784 "read": true, 00:09:10.784 "write": true, 00:09:10.784 "unmap": true, 00:09:10.784 "flush": true, 00:09:10.784 "reset": true, 00:09:10.784 "nvme_admin": false, 00:09:10.784 "nvme_io": false, 00:09:10.784 "nvme_io_md": false, 00:09:10.784 "write_zeroes": true, 00:09:10.784 "zcopy": true, 00:09:10.784 "get_zone_info": false, 00:09:10.784 "zone_management": false, 00:09:10.784 "zone_append": false, 00:09:10.784 "compare": false, 00:09:10.784 "compare_and_write": false, 00:09:10.784 "abort": true, 00:09:10.784 "seek_hole": false, 00:09:10.784 "seek_data": false, 00:09:10.784 "copy": true, 00:09:10.784 "nvme_iov_md": false 00:09:10.784 }, 00:09:10.784 "memory_domains": [ 00:09:10.784 { 00:09:10.784 "dma_device_id": "system", 00:09:10.784 "dma_device_type": 1 00:09:10.784 }, 00:09:10.784 { 
00:09:10.784 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:10.784 "dma_device_type": 2 00:09:10.784 } 00:09:10.784 ], 00:09:10.784 "driver_specific": {} 00:09:10.784 } 00:09:10.784 ] 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:10.784 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:10.784 [2024-10-09 03:11:54.081997] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:10.784 [2024-10-09 03:11:54.082054] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:10.784 [2024-10-09 03:11:54.082078] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:10.784 [2024-10-09 03:11:54.084121] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.044 "name": "Existed_Raid", 00:09:11.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.044 "strip_size_kb": 64, 00:09:11.044 "state": "configuring", 00:09:11.044 "raid_level": "raid0", 00:09:11.044 "superblock": false, 00:09:11.044 "num_base_bdevs": 3, 00:09:11.044 "num_base_bdevs_discovered": 2, 00:09:11.044 "num_base_bdevs_operational": 3, 00:09:11.044 "base_bdevs_list": [ 00:09:11.044 { 00:09:11.044 "name": "BaseBdev1", 00:09:11.044 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.044 
"is_configured": false, 00:09:11.044 "data_offset": 0, 00:09:11.044 "data_size": 0 00:09:11.044 }, 00:09:11.044 { 00:09:11.044 "name": "BaseBdev2", 00:09:11.044 "uuid": "d07e6965-6a79-4377-8e2a-344973d1fdee", 00:09:11.044 "is_configured": true, 00:09:11.044 "data_offset": 0, 00:09:11.044 "data_size": 65536 00:09:11.044 }, 00:09:11.044 { 00:09:11.044 "name": "BaseBdev3", 00:09:11.044 "uuid": "ecb889a2-cdf2-46a8-b8db-71c1496e2179", 00:09:11.044 "is_configured": true, 00:09:11.044 "data_offset": 0, 00:09:11.044 "data_size": 65536 00:09:11.044 } 00:09:11.044 ] 00:09:11.044 }' 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.044 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.304 [2024-10-09 03:11:54.573143] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:11.304 03:11:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:11.304 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.565 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:11.565 "name": "Existed_Raid", 00:09:11.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.565 "strip_size_kb": 64, 00:09:11.565 "state": "configuring", 00:09:11.565 "raid_level": "raid0", 00:09:11.565 "superblock": false, 00:09:11.565 "num_base_bdevs": 3, 00:09:11.565 "num_base_bdevs_discovered": 1, 00:09:11.565 "num_base_bdevs_operational": 3, 00:09:11.565 "base_bdevs_list": [ 00:09:11.565 { 00:09:11.565 "name": "BaseBdev1", 00:09:11.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.565 "is_configured": false, 00:09:11.565 "data_offset": 0, 00:09:11.565 "data_size": 0 00:09:11.565 }, 00:09:11.565 { 00:09:11.565 "name": null, 00:09:11.565 "uuid": "d07e6965-6a79-4377-8e2a-344973d1fdee", 00:09:11.565 "is_configured": false, 00:09:11.565 "data_offset": 0, 
00:09:11.565 "data_size": 65536 00:09:11.565 }, 00:09:11.565 { 00:09:11.565 "name": "BaseBdev3", 00:09:11.565 "uuid": "ecb889a2-cdf2-46a8-b8db-71c1496e2179", 00:09:11.565 "is_configured": true, 00:09:11.565 "data_offset": 0, 00:09:11.565 "data_size": 65536 00:09:11.565 } 00:09:11.565 ] 00:09:11.565 }' 00:09:11.565 03:11:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:11.565 03:11:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.825 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:11.825 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.825 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.825 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:11.825 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:11.825 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:11.825 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:11.825 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:11.825 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.085 [2024-10-09 03:11:55.139568] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:12.085 BaseBdev1 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local 
bdev_name=BaseBdev1 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.085 [ 00:09:12.085 { 00:09:12.085 "name": "BaseBdev1", 00:09:12.085 "aliases": [ 00:09:12.085 "12cc43e3-2b0d-4139-8a3b-79edd97d4ccf" 00:09:12.085 ], 00:09:12.085 "product_name": "Malloc disk", 00:09:12.085 "block_size": 512, 00:09:12.085 "num_blocks": 65536, 00:09:12.085 "uuid": "12cc43e3-2b0d-4139-8a3b-79edd97d4ccf", 00:09:12.085 "assigned_rate_limits": { 00:09:12.085 "rw_ios_per_sec": 0, 00:09:12.085 "rw_mbytes_per_sec": 0, 00:09:12.085 "r_mbytes_per_sec": 0, 00:09:12.085 "w_mbytes_per_sec": 0 00:09:12.085 }, 00:09:12.085 "claimed": true, 00:09:12.085 "claim_type": "exclusive_write", 00:09:12.085 "zoned": false, 00:09:12.085 "supported_io_types": { 00:09:12.085 "read": true, 00:09:12.085 "write": true, 00:09:12.085 "unmap": 
true, 00:09:12.085 "flush": true, 00:09:12.085 "reset": true, 00:09:12.085 "nvme_admin": false, 00:09:12.085 "nvme_io": false, 00:09:12.085 "nvme_io_md": false, 00:09:12.085 "write_zeroes": true, 00:09:12.085 "zcopy": true, 00:09:12.085 "get_zone_info": false, 00:09:12.085 "zone_management": false, 00:09:12.085 "zone_append": false, 00:09:12.085 "compare": false, 00:09:12.085 "compare_and_write": false, 00:09:12.085 "abort": true, 00:09:12.085 "seek_hole": false, 00:09:12.085 "seek_data": false, 00:09:12.085 "copy": true, 00:09:12.085 "nvme_iov_md": false 00:09:12.085 }, 00:09:12.085 "memory_domains": [ 00:09:12.085 { 00:09:12.085 "dma_device_id": "system", 00:09:12.085 "dma_device_type": 1 00:09:12.085 }, 00:09:12.085 { 00:09:12.085 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:12.085 "dma_device_type": 2 00:09:12.085 } 00:09:12.085 ], 00:09:12.085 "driver_specific": {} 00:09:12.085 } 00:09:12.085 ] 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.085 03:11:55 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.085 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.085 "name": "Existed_Raid", 00:09:12.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.085 "strip_size_kb": 64, 00:09:12.085 "state": "configuring", 00:09:12.085 "raid_level": "raid0", 00:09:12.085 "superblock": false, 00:09:12.085 "num_base_bdevs": 3, 00:09:12.085 "num_base_bdevs_discovered": 2, 00:09:12.085 "num_base_bdevs_operational": 3, 00:09:12.085 "base_bdevs_list": [ 00:09:12.085 { 00:09:12.086 "name": "BaseBdev1", 00:09:12.086 "uuid": "12cc43e3-2b0d-4139-8a3b-79edd97d4ccf", 00:09:12.086 "is_configured": true, 00:09:12.086 "data_offset": 0, 00:09:12.086 "data_size": 65536 00:09:12.086 }, 00:09:12.086 { 00:09:12.086 "name": null, 00:09:12.086 "uuid": "d07e6965-6a79-4377-8e2a-344973d1fdee", 00:09:12.086 "is_configured": false, 00:09:12.086 "data_offset": 0, 00:09:12.086 "data_size": 65536 00:09:12.086 }, 00:09:12.086 { 00:09:12.086 "name": "BaseBdev3", 00:09:12.086 "uuid": "ecb889a2-cdf2-46a8-b8db-71c1496e2179", 00:09:12.086 "is_configured": true, 00:09:12.086 "data_offset": 0, 
00:09:12.086 "data_size": 65536 00:09:12.086 } 00:09:12.086 ] 00:09:12.086 }' 00:09:12.086 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.086 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.346 [2024-10-09 03:11:55.598970] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.346 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.605 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.605 "name": "Existed_Raid", 00:09:12.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.605 "strip_size_kb": 64, 00:09:12.605 "state": "configuring", 00:09:12.605 "raid_level": "raid0", 00:09:12.605 "superblock": false, 00:09:12.605 "num_base_bdevs": 3, 00:09:12.605 "num_base_bdevs_discovered": 1, 00:09:12.605 "num_base_bdevs_operational": 3, 00:09:12.605 "base_bdevs_list": [ 00:09:12.605 { 00:09:12.605 "name": "BaseBdev1", 00:09:12.605 "uuid": "12cc43e3-2b0d-4139-8a3b-79edd97d4ccf", 00:09:12.605 "is_configured": true, 00:09:12.605 "data_offset": 0, 00:09:12.605 "data_size": 65536 00:09:12.605 }, 00:09:12.605 { 
00:09:12.605 "name": null, 00:09:12.605 "uuid": "d07e6965-6a79-4377-8e2a-344973d1fdee", 00:09:12.605 "is_configured": false, 00:09:12.605 "data_offset": 0, 00:09:12.605 "data_size": 65536 00:09:12.605 }, 00:09:12.605 { 00:09:12.605 "name": null, 00:09:12.605 "uuid": "ecb889a2-cdf2-46a8-b8db-71c1496e2179", 00:09:12.605 "is_configured": false, 00:09:12.605 "data_offset": 0, 00:09:12.605 "data_size": 65536 00:09:12.605 } 00:09:12.605 ] 00:09:12.605 }' 00:09:12.605 03:11:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.605 03:11:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.864 [2024-10-09 03:11:56.070415] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:12.864 "name": "Existed_Raid", 00:09:12.864 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:12.864 "strip_size_kb": 64, 00:09:12.864 "state": "configuring", 00:09:12.864 "raid_level": "raid0", 00:09:12.864 
"superblock": false, 00:09:12.864 "num_base_bdevs": 3, 00:09:12.864 "num_base_bdevs_discovered": 2, 00:09:12.864 "num_base_bdevs_operational": 3, 00:09:12.864 "base_bdevs_list": [ 00:09:12.864 { 00:09:12.864 "name": "BaseBdev1", 00:09:12.864 "uuid": "12cc43e3-2b0d-4139-8a3b-79edd97d4ccf", 00:09:12.864 "is_configured": true, 00:09:12.864 "data_offset": 0, 00:09:12.864 "data_size": 65536 00:09:12.864 }, 00:09:12.864 { 00:09:12.864 "name": null, 00:09:12.864 "uuid": "d07e6965-6a79-4377-8e2a-344973d1fdee", 00:09:12.864 "is_configured": false, 00:09:12.864 "data_offset": 0, 00:09:12.864 "data_size": 65536 00:09:12.864 }, 00:09:12.864 { 00:09:12.864 "name": "BaseBdev3", 00:09:12.864 "uuid": "ecb889a2-cdf2-46a8-b8db-71c1496e2179", 00:09:12.864 "is_configured": true, 00:09:12.864 "data_offset": 0, 00:09:12.864 "data_size": 65536 00:09:12.864 } 00:09:12.864 ] 00:09:12.864 }' 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:12.864 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.433 [2024-10-09 03:11:56.545704] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:13.433 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.433 "name": "Existed_Raid", 00:09:13.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.433 "strip_size_kb": 64, 00:09:13.433 "state": "configuring", 00:09:13.433 "raid_level": "raid0", 00:09:13.433 "superblock": false, 00:09:13.433 "num_base_bdevs": 3, 00:09:13.433 "num_base_bdevs_discovered": 1, 00:09:13.433 "num_base_bdevs_operational": 3, 00:09:13.433 "base_bdevs_list": [ 00:09:13.433 { 00:09:13.433 "name": null, 00:09:13.433 "uuid": "12cc43e3-2b0d-4139-8a3b-79edd97d4ccf", 00:09:13.433 "is_configured": false, 00:09:13.433 "data_offset": 0, 00:09:13.433 "data_size": 65536 00:09:13.433 }, 00:09:13.433 { 00:09:13.433 "name": null, 00:09:13.433 "uuid": "d07e6965-6a79-4377-8e2a-344973d1fdee", 00:09:13.433 "is_configured": false, 00:09:13.433 "data_offset": 0, 00:09:13.433 "data_size": 65536 00:09:13.433 }, 00:09:13.433 { 00:09:13.433 "name": "BaseBdev3", 00:09:13.433 "uuid": "ecb889a2-cdf2-46a8-b8db-71c1496e2179", 00:09:13.433 "is_configured": true, 00:09:13.433 "data_offset": 0, 00:09:13.433 "data_size": 65536 00:09:13.434 } 00:09:13.434 ] 00:09:13.434 }' 00:09:13.434 03:11:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.434 03:11:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.003 [2024-10-09 03:11:57.120788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.003 "name": "Existed_Raid", 00:09:14.003 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:14.003 "strip_size_kb": 64, 00:09:14.003 "state": "configuring", 00:09:14.003 "raid_level": "raid0", 00:09:14.003 "superblock": false, 00:09:14.003 "num_base_bdevs": 3, 00:09:14.003 "num_base_bdevs_discovered": 2, 00:09:14.003 "num_base_bdevs_operational": 3, 00:09:14.003 "base_bdevs_list": [ 00:09:14.003 { 00:09:14.003 "name": null, 00:09:14.003 "uuid": "12cc43e3-2b0d-4139-8a3b-79edd97d4ccf", 00:09:14.003 "is_configured": false, 00:09:14.003 "data_offset": 0, 00:09:14.003 "data_size": 65536 00:09:14.003 }, 00:09:14.003 { 00:09:14.003 "name": "BaseBdev2", 00:09:14.003 "uuid": "d07e6965-6a79-4377-8e2a-344973d1fdee", 00:09:14.003 "is_configured": true, 00:09:14.003 "data_offset": 0, 00:09:14.003 "data_size": 65536 00:09:14.003 }, 00:09:14.003 { 00:09:14.003 "name": "BaseBdev3", 00:09:14.003 "uuid": "ecb889a2-cdf2-46a8-b8db-71c1496e2179", 00:09:14.003 "is_configured": true, 00:09:14.003 "data_offset": 0, 00:09:14.003 "data_size": 65536 00:09:14.003 } 00:09:14.003 ] 00:09:14.003 }' 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.003 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.263 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:14.263 
03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.263 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.263 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 12cc43e3-2b0d-4139-8a3b-79edd97d4ccf 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.524 [2024-10-09 03:11:57.678258] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:14.524 [2024-10-09 03:11:57.678383] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:14.524 [2024-10-09 03:11:57.678413] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:14.524 [2024-10-09 03:11:57.678719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:09:14.524 [2024-10-09 03:11:57.678952] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:14.524 [2024-10-09 03:11:57.678995] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:14.524 [2024-10-09 03:11:57.679327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.524 NewBaseBdev 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:14.524 [ 00:09:14.524 { 00:09:14.524 "name": "NewBaseBdev", 00:09:14.524 "aliases": [ 00:09:14.524 "12cc43e3-2b0d-4139-8a3b-79edd97d4ccf" 00:09:14.524 ], 00:09:14.524 "product_name": "Malloc disk", 00:09:14.524 "block_size": 512, 00:09:14.524 "num_blocks": 65536, 00:09:14.524 "uuid": "12cc43e3-2b0d-4139-8a3b-79edd97d4ccf", 00:09:14.524 "assigned_rate_limits": { 00:09:14.524 "rw_ios_per_sec": 0, 00:09:14.524 "rw_mbytes_per_sec": 0, 00:09:14.524 "r_mbytes_per_sec": 0, 00:09:14.524 "w_mbytes_per_sec": 0 00:09:14.524 }, 00:09:14.524 "claimed": true, 00:09:14.524 "claim_type": "exclusive_write", 00:09:14.524 "zoned": false, 00:09:14.524 "supported_io_types": { 00:09:14.524 "read": true, 00:09:14.524 "write": true, 00:09:14.524 "unmap": true, 00:09:14.524 "flush": true, 00:09:14.524 "reset": true, 00:09:14.524 "nvme_admin": false, 00:09:14.524 "nvme_io": false, 00:09:14.524 "nvme_io_md": false, 00:09:14.524 "write_zeroes": true, 00:09:14.524 "zcopy": true, 00:09:14.524 "get_zone_info": false, 00:09:14.524 "zone_management": false, 00:09:14.524 "zone_append": false, 00:09:14.524 "compare": false, 00:09:14.524 "compare_and_write": false, 00:09:14.524 "abort": true, 00:09:14.524 "seek_hole": false, 00:09:14.524 "seek_data": false, 00:09:14.524 "copy": true, 00:09:14.524 "nvme_iov_md": false 00:09:14.524 }, 00:09:14.524 "memory_domains": [ 00:09:14.524 { 00:09:14.524 "dma_device_id": "system", 00:09:14.524 "dma_device_type": 1 00:09:14.524 }, 00:09:14.524 { 00:09:14.524 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.524 "dma_device_type": 2 00:09:14.524 } 00:09:14.524 ], 00:09:14.524 "driver_specific": {} 00:09:14.524 } 00:09:14.524 ] 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:14.524 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:14.525 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.525 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:14.525 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.525 "name": "Existed_Raid", 00:09:14.525 "uuid": "987caf94-664d-4a04-9fa9-23a686260713", 00:09:14.525 "strip_size_kb": 64, 00:09:14.525 "state": "online", 00:09:14.525 "raid_level": "raid0", 00:09:14.525 "superblock": false, 00:09:14.525 "num_base_bdevs": 3, 00:09:14.525 
"num_base_bdevs_discovered": 3, 00:09:14.525 "num_base_bdevs_operational": 3, 00:09:14.525 "base_bdevs_list": [ 00:09:14.525 { 00:09:14.525 "name": "NewBaseBdev", 00:09:14.525 "uuid": "12cc43e3-2b0d-4139-8a3b-79edd97d4ccf", 00:09:14.525 "is_configured": true, 00:09:14.525 "data_offset": 0, 00:09:14.525 "data_size": 65536 00:09:14.525 }, 00:09:14.525 { 00:09:14.525 "name": "BaseBdev2", 00:09:14.525 "uuid": "d07e6965-6a79-4377-8e2a-344973d1fdee", 00:09:14.525 "is_configured": true, 00:09:14.525 "data_offset": 0, 00:09:14.525 "data_size": 65536 00:09:14.525 }, 00:09:14.525 { 00:09:14.525 "name": "BaseBdev3", 00:09:14.525 "uuid": "ecb889a2-cdf2-46a8-b8db-71c1496e2179", 00:09:14.525 "is_configured": true, 00:09:14.525 "data_offset": 0, 00:09:14.525 "data_size": 65536 00:09:14.525 } 00:09:14.525 ] 00:09:14.525 }' 00:09:14.525 03:11:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.525 03:11:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.095 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:15.095 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:15.095 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:15.095 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:15.095 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:15.095 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:15.095 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:15.095 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.095 03:11:58 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:15.095 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.095 [2024-10-09 03:11:58.153772] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.095 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.095 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:15.095 "name": "Existed_Raid", 00:09:15.095 "aliases": [ 00:09:15.095 "987caf94-664d-4a04-9fa9-23a686260713" 00:09:15.095 ], 00:09:15.095 "product_name": "Raid Volume", 00:09:15.095 "block_size": 512, 00:09:15.095 "num_blocks": 196608, 00:09:15.095 "uuid": "987caf94-664d-4a04-9fa9-23a686260713", 00:09:15.095 "assigned_rate_limits": { 00:09:15.095 "rw_ios_per_sec": 0, 00:09:15.095 "rw_mbytes_per_sec": 0, 00:09:15.095 "r_mbytes_per_sec": 0, 00:09:15.096 "w_mbytes_per_sec": 0 00:09:15.096 }, 00:09:15.096 "claimed": false, 00:09:15.096 "zoned": false, 00:09:15.096 "supported_io_types": { 00:09:15.096 "read": true, 00:09:15.096 "write": true, 00:09:15.096 "unmap": true, 00:09:15.096 "flush": true, 00:09:15.096 "reset": true, 00:09:15.096 "nvme_admin": false, 00:09:15.096 "nvme_io": false, 00:09:15.096 "nvme_io_md": false, 00:09:15.096 "write_zeroes": true, 00:09:15.096 "zcopy": false, 00:09:15.096 "get_zone_info": false, 00:09:15.096 "zone_management": false, 00:09:15.096 "zone_append": false, 00:09:15.096 "compare": false, 00:09:15.096 "compare_and_write": false, 00:09:15.096 "abort": false, 00:09:15.096 "seek_hole": false, 00:09:15.096 "seek_data": false, 00:09:15.096 "copy": false, 00:09:15.096 "nvme_iov_md": false 00:09:15.096 }, 00:09:15.096 "memory_domains": [ 00:09:15.096 { 00:09:15.096 "dma_device_id": "system", 00:09:15.096 "dma_device_type": 1 00:09:15.096 }, 00:09:15.096 { 00:09:15.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.096 "dma_device_type": 2 00:09:15.096 }, 00:09:15.096 
{ 00:09:15.096 "dma_device_id": "system", 00:09:15.096 "dma_device_type": 1 00:09:15.096 }, 00:09:15.096 { 00:09:15.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.096 "dma_device_type": 2 00:09:15.096 }, 00:09:15.096 { 00:09:15.096 "dma_device_id": "system", 00:09:15.096 "dma_device_type": 1 00:09:15.096 }, 00:09:15.096 { 00:09:15.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.096 "dma_device_type": 2 00:09:15.096 } 00:09:15.096 ], 00:09:15.096 "driver_specific": { 00:09:15.096 "raid": { 00:09:15.096 "uuid": "987caf94-664d-4a04-9fa9-23a686260713", 00:09:15.096 "strip_size_kb": 64, 00:09:15.096 "state": "online", 00:09:15.096 "raid_level": "raid0", 00:09:15.096 "superblock": false, 00:09:15.096 "num_base_bdevs": 3, 00:09:15.096 "num_base_bdevs_discovered": 3, 00:09:15.096 "num_base_bdevs_operational": 3, 00:09:15.096 "base_bdevs_list": [ 00:09:15.096 { 00:09:15.096 "name": "NewBaseBdev", 00:09:15.096 "uuid": "12cc43e3-2b0d-4139-8a3b-79edd97d4ccf", 00:09:15.096 "is_configured": true, 00:09:15.096 "data_offset": 0, 00:09:15.096 "data_size": 65536 00:09:15.096 }, 00:09:15.096 { 00:09:15.096 "name": "BaseBdev2", 00:09:15.096 "uuid": "d07e6965-6a79-4377-8e2a-344973d1fdee", 00:09:15.096 "is_configured": true, 00:09:15.096 "data_offset": 0, 00:09:15.096 "data_size": 65536 00:09:15.096 }, 00:09:15.096 { 00:09:15.096 "name": "BaseBdev3", 00:09:15.096 "uuid": "ecb889a2-cdf2-46a8-b8db-71c1496e2179", 00:09:15.096 "is_configured": true, 00:09:15.096 "data_offset": 0, 00:09:15.096 "data_size": 65536 00:09:15.096 } 00:09:15.096 ] 00:09:15.096 } 00:09:15.096 } 00:09:15.096 }' 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:15.096 BaseBdev2 00:09:15.096 BaseBdev3' 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.096 
03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.096 [2024-10-09 03:11:58.341104] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.096 [2024-10-09 03:11:58.341134] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:15.096 [2024-10-09 03:11:58.341211] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.096 [2024-10-09 03:11:58.341269] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.096 [2024-10-09 03:11:58.341283] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63925 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 63925 ']' 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 63925 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 63925 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 63925' 00:09:15.096 killing process with pid 63925 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 63925 00:09:15.096 [2024-10-09 03:11:58.391857] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.096 03:11:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 63925 00:09:15.670 [2024-10-09 03:11:58.722630] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.051 03:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:17.052 00:09:17.052 real 0m10.859s 00:09:17.052 user 0m16.856s 00:09:17.052 sys 0m1.919s 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:17.052 
03:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.052 ************************************ 00:09:17.052 END TEST raid_state_function_test 00:09:17.052 ************************************ 00:09:17.052 03:12:00 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:17.052 03:12:00 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:17.052 03:12:00 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:17.052 03:12:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.052 ************************************ 00:09:17.052 START TEST raid_state_function_test_sb 00:09:17.052 ************************************ 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 3 true 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 
00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64552 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64552' 00:09:17.052 Process raid pid: 64552 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64552 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 64552 ']' 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:17.052 03:12:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.052 [2024-10-09 03:12:00.270828] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:09:17.052 [2024-10-09 03:12:00.271014] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.312 [2024-10-09 03:12:00.440131] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.571 [2024-10-09 03:12:00.695260] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.830 [2024-10-09 03:12:00.923685] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.830 [2024-10-09 03:12:00.923728] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.830 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:17.830 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:17.830 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:17.830 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.830 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.830 [2024-10-09 03:12:01.091096] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:17.830 [2024-10-09 03:12:01.091245] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:17.830 [2024-10-09 03:12:01.091277] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:17.830 [2024-10-09 03:12:01.091303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:17.830 [2024-10-09 03:12:01.091320] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:09:17.830 [2024-10-09 03:12:01.091341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:17.830 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.830 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.830 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:17.830 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:17.831 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:17.831 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:17.831 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:17.831 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:17.831 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:17.831 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:17.831 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:17.831 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:17.831 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:17.831 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.831 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:17.831 03:12:01 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.088 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.088 "name": "Existed_Raid", 00:09:18.088 "uuid": "9d2214a4-542b-45b6-8a22-666b8103e157", 00:09:18.088 "strip_size_kb": 64, 00:09:18.088 "state": "configuring", 00:09:18.088 "raid_level": "raid0", 00:09:18.088 "superblock": true, 00:09:18.088 "num_base_bdevs": 3, 00:09:18.089 "num_base_bdevs_discovered": 0, 00:09:18.089 "num_base_bdevs_operational": 3, 00:09:18.089 "base_bdevs_list": [ 00:09:18.089 { 00:09:18.089 "name": "BaseBdev1", 00:09:18.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.089 "is_configured": false, 00:09:18.089 "data_offset": 0, 00:09:18.089 "data_size": 0 00:09:18.089 }, 00:09:18.089 { 00:09:18.089 "name": "BaseBdev2", 00:09:18.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.089 "is_configured": false, 00:09:18.089 "data_offset": 0, 00:09:18.089 "data_size": 0 00:09:18.089 }, 00:09:18.089 { 00:09:18.089 "name": "BaseBdev3", 00:09:18.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.089 "is_configured": false, 00:09:18.089 "data_offset": 0, 00:09:18.089 "data_size": 0 00:09:18.089 } 00:09:18.089 ] 00:09:18.089 }' 00:09:18.089 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.089 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.347 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.347 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.348 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.348 [2024-10-09 03:12:01.582263] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.348 [2024-10-09 03:12:01.582317] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:18.348 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.348 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.348 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.348 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.348 [2024-10-09 03:12:01.594261] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:18.348 [2024-10-09 03:12:01.594373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:18.348 [2024-10-09 03:12:01.594387] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.348 [2024-10-09 03:12:01.594397] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.348 [2024-10-09 03:12:01.594403] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.348 [2024-10-09 03:12:01.594413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.348 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.348 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:18.348 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.348 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.607 [2024-10-09 03:12:01.653053] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.607 BaseBdev1 
00:09:18.607 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.607 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:18.607 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:18.607 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:18.607 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:18.607 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:18.607 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:18.607 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:18.607 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.607 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.607 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.607 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:18.607 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.607 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.607 [ 00:09:18.607 { 00:09:18.607 "name": "BaseBdev1", 00:09:18.607 "aliases": [ 00:09:18.607 "89940189-554d-4da2-b520-0c26bb6705ad" 00:09:18.607 ], 00:09:18.607 "product_name": "Malloc disk", 00:09:18.607 "block_size": 512, 00:09:18.607 "num_blocks": 65536, 00:09:18.607 "uuid": "89940189-554d-4da2-b520-0c26bb6705ad", 00:09:18.607 "assigned_rate_limits": { 00:09:18.607 
"rw_ios_per_sec": 0, 00:09:18.607 "rw_mbytes_per_sec": 0, 00:09:18.607 "r_mbytes_per_sec": 0, 00:09:18.607 "w_mbytes_per_sec": 0 00:09:18.607 }, 00:09:18.607 "claimed": true, 00:09:18.608 "claim_type": "exclusive_write", 00:09:18.608 "zoned": false, 00:09:18.608 "supported_io_types": { 00:09:18.608 "read": true, 00:09:18.608 "write": true, 00:09:18.608 "unmap": true, 00:09:18.608 "flush": true, 00:09:18.608 "reset": true, 00:09:18.608 "nvme_admin": false, 00:09:18.608 "nvme_io": false, 00:09:18.608 "nvme_io_md": false, 00:09:18.608 "write_zeroes": true, 00:09:18.608 "zcopy": true, 00:09:18.608 "get_zone_info": false, 00:09:18.608 "zone_management": false, 00:09:18.608 "zone_append": false, 00:09:18.608 "compare": false, 00:09:18.608 "compare_and_write": false, 00:09:18.608 "abort": true, 00:09:18.608 "seek_hole": false, 00:09:18.608 "seek_data": false, 00:09:18.608 "copy": true, 00:09:18.608 "nvme_iov_md": false 00:09:18.608 }, 00:09:18.608 "memory_domains": [ 00:09:18.608 { 00:09:18.608 "dma_device_id": "system", 00:09:18.608 "dma_device_type": 1 00:09:18.608 }, 00:09:18.608 { 00:09:18.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.608 "dma_device_type": 2 00:09:18.608 } 00:09:18.608 ], 00:09:18.608 "driver_specific": {} 00:09:18.608 } 00:09:18.608 ] 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.608 "name": "Existed_Raid", 00:09:18.608 "uuid": "6ba49edd-f4af-4e7d-85e9-884017385855", 00:09:18.608 "strip_size_kb": 64, 00:09:18.608 "state": "configuring", 00:09:18.608 "raid_level": "raid0", 00:09:18.608 "superblock": true, 00:09:18.608 "num_base_bdevs": 3, 00:09:18.608 "num_base_bdevs_discovered": 1, 00:09:18.608 "num_base_bdevs_operational": 3, 00:09:18.608 "base_bdevs_list": [ 00:09:18.608 { 00:09:18.608 "name": "BaseBdev1", 00:09:18.608 "uuid": "89940189-554d-4da2-b520-0c26bb6705ad", 00:09:18.608 "is_configured": true, 00:09:18.608 "data_offset": 2048, 00:09:18.608 "data_size": 63488 
00:09:18.608 }, 00:09:18.608 { 00:09:18.608 "name": "BaseBdev2", 00:09:18.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.608 "is_configured": false, 00:09:18.608 "data_offset": 0, 00:09:18.608 "data_size": 0 00:09:18.608 }, 00:09:18.608 { 00:09:18.608 "name": "BaseBdev3", 00:09:18.608 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:18.608 "is_configured": false, 00:09:18.608 "data_offset": 0, 00:09:18.608 "data_size": 0 00:09:18.608 } 00:09:18.608 ] 00:09:18.608 }' 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.608 03:12:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.868 [2024-10-09 03:12:02.132303] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:18.868 [2024-10-09 03:12:02.132379] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:18.868 [2024-10-09 03:12:02.144308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:18.868 [2024-10-09 
03:12:02.146474] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:18.868 [2024-10-09 03:12:02.146524] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:18.868 [2024-10-09 03:12:02.146534] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:18.868 [2024-10-09 03:12:02.146543] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:18.868 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.128 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.128 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.128 "name": "Existed_Raid", 00:09:19.128 "uuid": "47b2aaaf-8d5e-4a0f-9925-cfeafcdb550f", 00:09:19.128 "strip_size_kb": 64, 00:09:19.128 "state": "configuring", 00:09:19.128 "raid_level": "raid0", 00:09:19.128 "superblock": true, 00:09:19.128 "num_base_bdevs": 3, 00:09:19.128 "num_base_bdevs_discovered": 1, 00:09:19.128 "num_base_bdevs_operational": 3, 00:09:19.129 "base_bdevs_list": [ 00:09:19.129 { 00:09:19.129 "name": "BaseBdev1", 00:09:19.129 "uuid": "89940189-554d-4da2-b520-0c26bb6705ad", 00:09:19.129 "is_configured": true, 00:09:19.129 "data_offset": 2048, 00:09:19.129 "data_size": 63488 00:09:19.129 }, 00:09:19.129 { 00:09:19.129 "name": "BaseBdev2", 00:09:19.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.129 "is_configured": false, 00:09:19.129 "data_offset": 0, 00:09:19.129 "data_size": 0 00:09:19.129 }, 00:09:19.129 { 00:09:19.129 "name": "BaseBdev3", 00:09:19.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.129 "is_configured": false, 00:09:19.129 "data_offset": 0, 00:09:19.129 "data_size": 0 00:09:19.129 } 00:09:19.129 ] 00:09:19.129 }' 00:09:19.129 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.129 03:12:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:19.390 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:19.390 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.390 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.390 [2024-10-09 03:12:02.621356] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:19.390 BaseBdev2 00:09:19.390 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.390 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:19.390 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.391 [ 00:09:19.391 { 00:09:19.391 "name": "BaseBdev2", 00:09:19.391 "aliases": [ 00:09:19.391 "a975a1bc-e654-4e62-81eb-7bc83ed2d76f" 00:09:19.391 ], 00:09:19.391 "product_name": "Malloc disk", 00:09:19.391 "block_size": 512, 00:09:19.391 "num_blocks": 65536, 00:09:19.391 "uuid": "a975a1bc-e654-4e62-81eb-7bc83ed2d76f", 00:09:19.391 "assigned_rate_limits": { 00:09:19.391 "rw_ios_per_sec": 0, 00:09:19.391 "rw_mbytes_per_sec": 0, 00:09:19.391 "r_mbytes_per_sec": 0, 00:09:19.391 "w_mbytes_per_sec": 0 00:09:19.391 }, 00:09:19.391 "claimed": true, 00:09:19.391 "claim_type": "exclusive_write", 00:09:19.391 "zoned": false, 00:09:19.391 "supported_io_types": { 00:09:19.391 "read": true, 00:09:19.391 "write": true, 00:09:19.391 "unmap": true, 00:09:19.391 "flush": true, 00:09:19.391 "reset": true, 00:09:19.391 "nvme_admin": false, 00:09:19.391 "nvme_io": false, 00:09:19.391 "nvme_io_md": false, 00:09:19.391 "write_zeroes": true, 00:09:19.391 "zcopy": true, 00:09:19.391 "get_zone_info": false, 00:09:19.391 "zone_management": false, 00:09:19.391 "zone_append": false, 00:09:19.391 "compare": false, 00:09:19.391 "compare_and_write": false, 00:09:19.391 "abort": true, 00:09:19.391 "seek_hole": false, 00:09:19.391 "seek_data": false, 00:09:19.391 "copy": true, 00:09:19.391 "nvme_iov_md": false 00:09:19.391 }, 00:09:19.391 "memory_domains": [ 00:09:19.391 { 00:09:19.391 "dma_device_id": "system", 00:09:19.391 "dma_device_type": 1 00:09:19.391 }, 00:09:19.391 { 00:09:19.391 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.391 "dma_device_type": 2 00:09:19.391 } 00:09:19.391 ], 00:09:19.391 "driver_specific": {} 00:09:19.391 } 00:09:19.391 ] 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@907 -- # return 0 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.391 03:12:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.657 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.657 "name": "Existed_Raid", 00:09:19.657 "uuid": "47b2aaaf-8d5e-4a0f-9925-cfeafcdb550f", 00:09:19.657 "strip_size_kb": 64, 00:09:19.657 "state": "configuring", 00:09:19.657 "raid_level": "raid0", 00:09:19.657 "superblock": true, 00:09:19.657 "num_base_bdevs": 3, 00:09:19.657 "num_base_bdevs_discovered": 2, 00:09:19.657 "num_base_bdevs_operational": 3, 00:09:19.657 "base_bdevs_list": [ 00:09:19.657 { 00:09:19.657 "name": "BaseBdev1", 00:09:19.657 "uuid": "89940189-554d-4da2-b520-0c26bb6705ad", 00:09:19.657 "is_configured": true, 00:09:19.657 "data_offset": 2048, 00:09:19.657 "data_size": 63488 00:09:19.657 }, 00:09:19.657 { 00:09:19.657 "name": "BaseBdev2", 00:09:19.657 "uuid": "a975a1bc-e654-4e62-81eb-7bc83ed2d76f", 00:09:19.657 "is_configured": true, 00:09:19.657 "data_offset": 2048, 00:09:19.657 "data_size": 63488 00:09:19.657 }, 00:09:19.657 { 00:09:19.657 "name": "BaseBdev3", 00:09:19.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:19.657 "is_configured": false, 00:09:19.657 "data_offset": 0, 00:09:19.657 "data_size": 0 00:09:19.657 } 00:09:19.657 ] 00:09:19.657 }' 00:09:19.657 03:12:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.657 03:12:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.916 [2024-10-09 03:12:03.151667] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:19.916 [2024-10-09 03:12:03.151971] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:19.916 [2024-10-09 03:12:03.151999] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:19.916 [2024-10-09 03:12:03.152295] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:19.916 [2024-10-09 03:12:03.152455] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:19.916 [2024-10-09 03:12:03.152468] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:19.916 BaseBdev3 00:09:19.916 [2024-10-09 03:12:03.152622] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.916 [ 00:09:19.916 { 00:09:19.916 "name": "BaseBdev3", 00:09:19.916 "aliases": [ 00:09:19.916 "1fd22a9e-3cbe-41da-ad22-3c7d9f20b013" 00:09:19.916 ], 00:09:19.916 "product_name": "Malloc disk", 00:09:19.916 "block_size": 512, 00:09:19.916 "num_blocks": 65536, 00:09:19.916 "uuid": "1fd22a9e-3cbe-41da-ad22-3c7d9f20b013", 00:09:19.916 "assigned_rate_limits": { 00:09:19.916 "rw_ios_per_sec": 0, 00:09:19.916 "rw_mbytes_per_sec": 0, 00:09:19.916 "r_mbytes_per_sec": 0, 00:09:19.916 "w_mbytes_per_sec": 0 00:09:19.916 }, 00:09:19.916 "claimed": true, 00:09:19.916 "claim_type": "exclusive_write", 00:09:19.916 "zoned": false, 00:09:19.916 "supported_io_types": { 00:09:19.916 "read": true, 00:09:19.916 "write": true, 00:09:19.916 "unmap": true, 00:09:19.916 "flush": true, 00:09:19.916 "reset": true, 00:09:19.916 "nvme_admin": false, 00:09:19.916 "nvme_io": false, 00:09:19.916 "nvme_io_md": false, 00:09:19.916 "write_zeroes": true, 00:09:19.916 "zcopy": true, 00:09:19.916 "get_zone_info": false, 00:09:19.916 "zone_management": false, 00:09:19.916 "zone_append": false, 00:09:19.916 "compare": false, 00:09:19.916 "compare_and_write": false, 00:09:19.916 "abort": true, 00:09:19.916 "seek_hole": false, 00:09:19.916 "seek_data": false, 00:09:19.916 "copy": true, 00:09:19.916 "nvme_iov_md": false 00:09:19.916 }, 00:09:19.916 "memory_domains": [ 00:09:19.916 { 00:09:19.916 "dma_device_id": "system", 00:09:19.916 "dma_device_type": 1 00:09:19.916 }, 00:09:19.916 { 00:09:19.916 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.916 "dma_device_type": 2 00:09:19.916 } 00:09:19.916 ], 00:09:19.916 "driver_specific": 
{} 00:09:19.916 } 00:09:19.916 ] 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.916 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:19.917 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.917 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.917 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:19.917 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.917 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.917 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.917 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:09:19.917 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:19.917 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.175 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:20.175 "name": "Existed_Raid", 00:09:20.175 "uuid": "47b2aaaf-8d5e-4a0f-9925-cfeafcdb550f", 00:09:20.175 "strip_size_kb": 64, 00:09:20.175 "state": "online", 00:09:20.175 "raid_level": "raid0", 00:09:20.175 "superblock": true, 00:09:20.176 "num_base_bdevs": 3, 00:09:20.176 "num_base_bdevs_discovered": 3, 00:09:20.176 "num_base_bdevs_operational": 3, 00:09:20.176 "base_bdevs_list": [ 00:09:20.176 { 00:09:20.176 "name": "BaseBdev1", 00:09:20.176 "uuid": "89940189-554d-4da2-b520-0c26bb6705ad", 00:09:20.176 "is_configured": true, 00:09:20.176 "data_offset": 2048, 00:09:20.176 "data_size": 63488 00:09:20.176 }, 00:09:20.176 { 00:09:20.176 "name": "BaseBdev2", 00:09:20.176 "uuid": "a975a1bc-e654-4e62-81eb-7bc83ed2d76f", 00:09:20.176 "is_configured": true, 00:09:20.176 "data_offset": 2048, 00:09:20.176 "data_size": 63488 00:09:20.176 }, 00:09:20.176 { 00:09:20.176 "name": "BaseBdev3", 00:09:20.176 "uuid": "1fd22a9e-3cbe-41da-ad22-3c7d9f20b013", 00:09:20.176 "is_configured": true, 00:09:20.176 "data_offset": 2048, 00:09:20.176 "data_size": 63488 00:09:20.176 } 00:09:20.176 ] 00:09:20.176 }' 00:09:20.176 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.176 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.435 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:20.435 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:20.435 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:09:20.435 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:20.435 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:20.435 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:20.435 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:20.435 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:20.435 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:20.435 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:20.435 [2024-10-09 03:12:03.627273] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:20.435 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:20.435 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:20.435 "name": "Existed_Raid", 00:09:20.435 "aliases": [ 00:09:20.435 "47b2aaaf-8d5e-4a0f-9925-cfeafcdb550f" 00:09:20.435 ], 00:09:20.435 "product_name": "Raid Volume", 00:09:20.435 "block_size": 512, 00:09:20.435 "num_blocks": 190464, 00:09:20.435 "uuid": "47b2aaaf-8d5e-4a0f-9925-cfeafcdb550f", 00:09:20.435 "assigned_rate_limits": { 00:09:20.435 "rw_ios_per_sec": 0, 00:09:20.435 "rw_mbytes_per_sec": 0, 00:09:20.435 "r_mbytes_per_sec": 0, 00:09:20.435 "w_mbytes_per_sec": 0 00:09:20.435 }, 00:09:20.435 "claimed": false, 00:09:20.435 "zoned": false, 00:09:20.435 "supported_io_types": { 00:09:20.435 "read": true, 00:09:20.435 "write": true, 00:09:20.435 "unmap": true, 00:09:20.435 "flush": true, 00:09:20.435 "reset": true, 00:09:20.435 "nvme_admin": false, 00:09:20.435 "nvme_io": false, 00:09:20.435 "nvme_io_md": false, 00:09:20.435 
"write_zeroes": true, 00:09:20.435 "zcopy": false, 00:09:20.435 "get_zone_info": false, 00:09:20.435 "zone_management": false, 00:09:20.435 "zone_append": false, 00:09:20.435 "compare": false, 00:09:20.435 "compare_and_write": false, 00:09:20.435 "abort": false, 00:09:20.435 "seek_hole": false, 00:09:20.435 "seek_data": false, 00:09:20.435 "copy": false, 00:09:20.435 "nvme_iov_md": false 00:09:20.435 }, 00:09:20.435 "memory_domains": [ 00:09:20.435 { 00:09:20.435 "dma_device_id": "system", 00:09:20.435 "dma_device_type": 1 00:09:20.435 }, 00:09:20.435 { 00:09:20.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.435 "dma_device_type": 2 00:09:20.435 }, 00:09:20.435 { 00:09:20.435 "dma_device_id": "system", 00:09:20.436 "dma_device_type": 1 00:09:20.436 }, 00:09:20.436 { 00:09:20.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.436 "dma_device_type": 2 00:09:20.436 }, 00:09:20.436 { 00:09:20.436 "dma_device_id": "system", 00:09:20.436 "dma_device_type": 1 00:09:20.436 }, 00:09:20.436 { 00:09:20.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.436 "dma_device_type": 2 00:09:20.436 } 00:09:20.436 ], 00:09:20.436 "driver_specific": { 00:09:20.436 "raid": { 00:09:20.436 "uuid": "47b2aaaf-8d5e-4a0f-9925-cfeafcdb550f", 00:09:20.436 "strip_size_kb": 64, 00:09:20.436 "state": "online", 00:09:20.436 "raid_level": "raid0", 00:09:20.436 "superblock": true, 00:09:20.436 "num_base_bdevs": 3, 00:09:20.436 "num_base_bdevs_discovered": 3, 00:09:20.436 "num_base_bdevs_operational": 3, 00:09:20.436 "base_bdevs_list": [ 00:09:20.436 { 00:09:20.436 "name": "BaseBdev1", 00:09:20.436 "uuid": "89940189-554d-4da2-b520-0c26bb6705ad", 00:09:20.436 "is_configured": true, 00:09:20.436 "data_offset": 2048, 00:09:20.436 "data_size": 63488 00:09:20.436 }, 00:09:20.436 { 00:09:20.436 "name": "BaseBdev2", 00:09:20.436 "uuid": "a975a1bc-e654-4e62-81eb-7bc83ed2d76f", 00:09:20.436 "is_configured": true, 00:09:20.436 "data_offset": 2048, 00:09:20.436 "data_size": 63488 00:09:20.436 }, 
00:09:20.436 {
00:09:20.436 "name": "BaseBdev3",
00:09:20.436 "uuid": "1fd22a9e-3cbe-41da-ad22-3c7d9f20b013",
00:09:20.436 "is_configured": true,
00:09:20.436 "data_offset": 2048,
00:09:20.436 "data_size": 63488
00:09:20.436 }
00:09:20.436 ]
00:09:20.436 }
00:09:20.436 }
00:09:20.436 }'
00:09:20.436 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:20.436 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:20.436 BaseBdev2
00:09:20.436 BaseBdev3'
00:09:20.436 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:20.436 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:20.436 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:20.436 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:20.436 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.436 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:20.436 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:20.697 [2024-10-09 03:12:03.866493] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:20.697 [2024-10-09 03:12:03.866587] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:20.697 [2024-10-09 03:12:03.866657] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:20.697 03:12:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:09:20.957 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:20.957 "name": "Existed_Raid",
00:09:20.957 "uuid": "47b2aaaf-8d5e-4a0f-9925-cfeafcdb550f",
00:09:20.957 "strip_size_kb": 64,
00:09:20.957 "state": "offline",
00:09:20.957 "raid_level": "raid0",
00:09:20.957 "superblock": true,
00:09:20.957 "num_base_bdevs": 3,
00:09:20.957 "num_base_bdevs_discovered": 2,
00:09:20.957 "num_base_bdevs_operational": 2,
00:09:20.957 "base_bdevs_list": [
00:09:20.957 {
00:09:20.957 "name": null,
00:09:20.957 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:20.957 "is_configured": false,
00:09:20.957 "data_offset": 0,
00:09:20.957 "data_size": 63488
00:09:20.957 },
00:09:20.957 {
00:09:20.957 "name": "BaseBdev2",
00:09:20.957 "uuid": "a975a1bc-e654-4e62-81eb-7bc83ed2d76f",
00:09:20.957 "is_configured": true,
00:09:20.957 "data_offset": 2048,
00:09:20.957 "data_size": 63488
00:09:20.957 },
00:09:20.957 {
00:09:20.957 "name": "BaseBdev3",
00:09:20.957 "uuid": "1fd22a9e-3cbe-41da-ad22-3c7d9f20b013",
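`verify_raid_bdev_state` in the trace above pulls one raid bdev's record out of `bdev_raid_get_bdevs all` by name and then compares its fields against the expected state. A standalone sketch of that selection, using a minimal hypothetical dump in place of the live RPC output:

```shell
# Minimal, hypothetical stand-in for `rpc.py bdev_raid_get_bdevs all` output (sample data only).
dump='[{"name":"Existed_Raid","state":"offline","num_base_bdevs_discovered":2,"num_base_bdevs_operational":2}]'

# Same filter as bdev_raid.sh@113: select the array entry whose name matches.
raid_bdev_info=$(printf '%s' "$dump" | jq -r '.[] | select(.name == "Existed_Raid")')

# Individual fields can then be pulled out of the selected object and compared.
state=$(printf '%s' "$raid_bdev_info" | jq -r '.state')
echo "$state"
```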
00:09:20.957 "is_configured": true, 00:09:20.957 "data_offset": 2048, 00:09:20.957 "data_size": 63488 00:09:20.957 } 00:09:20.957 ] 00:09:20.957 }' 00:09:20.957 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:20.957 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.217 [2024-10-09 03:12:04.377848] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.217 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.477 [2024-10-09 03:12:04.543231] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:21.477 [2024-10-09 03:12:04.543300] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:21.477 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.478 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.478 BaseBdev2 00:09:21.478 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.478 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:21.478 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:21.478 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.478 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:21.478 03:12:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.478 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.478 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:21.478 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.478 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.478 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.478 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:21.478 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.478 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.478 [ 00:09:21.478 { 00:09:21.478 "name": "BaseBdev2", 00:09:21.478 "aliases": [ 00:09:21.478 "67b7df7b-102f-460e-90ab-eeca32113f4c" 00:09:21.478 ], 00:09:21.478 "product_name": "Malloc disk", 00:09:21.478 "block_size": 512, 00:09:21.478 "num_blocks": 65536, 00:09:21.478 "uuid": "67b7df7b-102f-460e-90ab-eeca32113f4c", 00:09:21.478 "assigned_rate_limits": { 00:09:21.478 "rw_ios_per_sec": 0, 00:09:21.478 "rw_mbytes_per_sec": 0, 00:09:21.478 "r_mbytes_per_sec": 0, 00:09:21.478 "w_mbytes_per_sec": 0 00:09:21.478 }, 00:09:21.478 "claimed": false, 00:09:21.478 "zoned": false, 00:09:21.478 "supported_io_types": { 00:09:21.478 "read": true, 00:09:21.478 "write": true, 00:09:21.478 "unmap": true, 00:09:21.478 "flush": true, 00:09:21.478 "reset": true, 00:09:21.478 "nvme_admin": false, 00:09:21.478 "nvme_io": false, 00:09:21.478 "nvme_io_md": false, 00:09:21.478 "write_zeroes": true, 00:09:21.478 "zcopy": true, 00:09:21.478 "get_zone_info": false, 00:09:21.478 
"zone_management": false, 00:09:21.478 "zone_append": false, 00:09:21.738 "compare": false, 00:09:21.738 "compare_and_write": false, 00:09:21.738 "abort": true, 00:09:21.738 "seek_hole": false, 00:09:21.738 "seek_data": false, 00:09:21.738 "copy": true, 00:09:21.738 "nvme_iov_md": false 00:09:21.738 }, 00:09:21.738 "memory_domains": [ 00:09:21.738 { 00:09:21.738 "dma_device_id": "system", 00:09:21.738 "dma_device_type": 1 00:09:21.738 }, 00:09:21.738 { 00:09:21.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.738 "dma_device_type": 2 00:09:21.738 } 00:09:21.738 ], 00:09:21.738 "driver_specific": {} 00:09:21.738 } 00:09:21.738 ] 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.738 BaseBdev3 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@901 -- # local i 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.738 [ 00:09:21.738 { 00:09:21.738 "name": "BaseBdev3", 00:09:21.738 "aliases": [ 00:09:21.738 "a4496442-33dc-4dfd-8b26-51cbaa42a635" 00:09:21.738 ], 00:09:21.738 "product_name": "Malloc disk", 00:09:21.738 "block_size": 512, 00:09:21.738 "num_blocks": 65536, 00:09:21.738 "uuid": "a4496442-33dc-4dfd-8b26-51cbaa42a635", 00:09:21.738 "assigned_rate_limits": { 00:09:21.738 "rw_ios_per_sec": 0, 00:09:21.738 "rw_mbytes_per_sec": 0, 00:09:21.738 "r_mbytes_per_sec": 0, 00:09:21.738 "w_mbytes_per_sec": 0 00:09:21.738 }, 00:09:21.738 "claimed": false, 00:09:21.738 "zoned": false, 00:09:21.738 "supported_io_types": { 00:09:21.738 "read": true, 00:09:21.738 "write": true, 00:09:21.738 "unmap": true, 00:09:21.738 "flush": true, 00:09:21.738 "reset": true, 00:09:21.738 "nvme_admin": false, 00:09:21.738 "nvme_io": false, 00:09:21.738 "nvme_io_md": false, 00:09:21.738 "write_zeroes": true, 00:09:21.738 
"zcopy": true, 00:09:21.738 "get_zone_info": false, 00:09:21.738 "zone_management": false, 00:09:21.738 "zone_append": false, 00:09:21.738 "compare": false, 00:09:21.738 "compare_and_write": false, 00:09:21.738 "abort": true, 00:09:21.738 "seek_hole": false, 00:09:21.738 "seek_data": false, 00:09:21.738 "copy": true, 00:09:21.738 "nvme_iov_md": false 00:09:21.738 }, 00:09:21.738 "memory_domains": [ 00:09:21.738 { 00:09:21.738 "dma_device_id": "system", 00:09:21.738 "dma_device_type": 1 00:09:21.738 }, 00:09:21.738 { 00:09:21.738 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:21.738 "dma_device_type": 2 00:09:21.738 } 00:09:21.738 ], 00:09:21.738 "driver_specific": {} 00:09:21.738 } 00:09:21.738 ] 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.738 [2024-10-09 03:12:04.879548] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:21.738 [2024-10-09 03:12:04.879694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:21.738 [2024-10-09 03:12:04.879737] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:21.738 [2024-10-09 03:12:04.881780] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.738 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.738 03:12:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:21.738 "name": "Existed_Raid", 00:09:21.738 "uuid": "b9482f5a-1de1-433d-ac9b-407a7d01e33f", 00:09:21.738 "strip_size_kb": 64, 00:09:21.738 "state": "configuring", 00:09:21.738 "raid_level": "raid0", 00:09:21.738 "superblock": true, 00:09:21.738 "num_base_bdevs": 3, 00:09:21.738 "num_base_bdevs_discovered": 2, 00:09:21.738 "num_base_bdevs_operational": 3, 00:09:21.738 "base_bdevs_list": [ 00:09:21.738 { 00:09:21.738 "name": "BaseBdev1", 00:09:21.738 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.738 "is_configured": false, 00:09:21.738 "data_offset": 0, 00:09:21.738 "data_size": 0 00:09:21.738 }, 00:09:21.738 { 00:09:21.738 "name": "BaseBdev2", 00:09:21.738 "uuid": "67b7df7b-102f-460e-90ab-eeca32113f4c", 00:09:21.738 "is_configured": true, 00:09:21.738 "data_offset": 2048, 00:09:21.738 "data_size": 63488 00:09:21.738 }, 00:09:21.738 { 00:09:21.738 "name": "BaseBdev3", 00:09:21.738 "uuid": "a4496442-33dc-4dfd-8b26-51cbaa42a635", 00:09:21.739 "is_configured": true, 00:09:21.739 "data_offset": 2048, 00:09:21.739 "data_size": 63488 00:09:21.739 } 00:09:21.739 ] 00:09:21.739 }' 00:09:21.739 03:12:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:21.739 03:12:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.998 [2024-10-09 03:12:05.266885] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:21.998 03:12:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:21.998 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.257 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.257 "name": "Existed_Raid", 00:09:22.257 "uuid": "b9482f5a-1de1-433d-ac9b-407a7d01e33f", 00:09:22.257 "strip_size_kb": 64, 
00:09:22.257 "state": "configuring", 00:09:22.257 "raid_level": "raid0", 00:09:22.257 "superblock": true, 00:09:22.257 "num_base_bdevs": 3, 00:09:22.257 "num_base_bdevs_discovered": 1, 00:09:22.257 "num_base_bdevs_operational": 3, 00:09:22.257 "base_bdevs_list": [ 00:09:22.257 { 00:09:22.257 "name": "BaseBdev1", 00:09:22.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:22.257 "is_configured": false, 00:09:22.257 "data_offset": 0, 00:09:22.257 "data_size": 0 00:09:22.257 }, 00:09:22.257 { 00:09:22.257 "name": null, 00:09:22.257 "uuid": "67b7df7b-102f-460e-90ab-eeca32113f4c", 00:09:22.257 "is_configured": false, 00:09:22.257 "data_offset": 0, 00:09:22.257 "data_size": 63488 00:09:22.257 }, 00:09:22.257 { 00:09:22.257 "name": "BaseBdev3", 00:09:22.257 "uuid": "a4496442-33dc-4dfd-8b26-51cbaa42a635", 00:09:22.257 "is_configured": true, 00:09:22.257 "data_offset": 2048, 00:09:22.257 "data_size": 63488 00:09:22.257 } 00:09:22.257 ] 00:09:22.257 }' 00:09:22.257 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.257 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.515 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.515 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:22.515 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.515 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.515 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.515 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:22.515 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:09:22.515 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.515 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.515 [2024-10-09 03:12:05.801976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:22.515 BaseBdev1 00:09:22.515 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.515 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:22.515 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:22.515 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:22.516 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:22.516 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:22.516 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:22.516 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:22.516 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.516 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.516 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.516 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:22.516 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.516 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.773 
[ 00:09:22.773 { 00:09:22.773 "name": "BaseBdev1", 00:09:22.773 "aliases": [ 00:09:22.773 "a488c45f-27b5-4245-acec-3329971bfc1a" 00:09:22.773 ], 00:09:22.773 "product_name": "Malloc disk", 00:09:22.773 "block_size": 512, 00:09:22.773 "num_blocks": 65536, 00:09:22.773 "uuid": "a488c45f-27b5-4245-acec-3329971bfc1a", 00:09:22.773 "assigned_rate_limits": { 00:09:22.773 "rw_ios_per_sec": 0, 00:09:22.773 "rw_mbytes_per_sec": 0, 00:09:22.773 "r_mbytes_per_sec": 0, 00:09:22.773 "w_mbytes_per_sec": 0 00:09:22.773 }, 00:09:22.773 "claimed": true, 00:09:22.773 "claim_type": "exclusive_write", 00:09:22.773 "zoned": false, 00:09:22.773 "supported_io_types": { 00:09:22.773 "read": true, 00:09:22.773 "write": true, 00:09:22.773 "unmap": true, 00:09:22.773 "flush": true, 00:09:22.773 "reset": true, 00:09:22.773 "nvme_admin": false, 00:09:22.773 "nvme_io": false, 00:09:22.773 "nvme_io_md": false, 00:09:22.773 "write_zeroes": true, 00:09:22.773 "zcopy": true, 00:09:22.773 "get_zone_info": false, 00:09:22.773 "zone_management": false, 00:09:22.773 "zone_append": false, 00:09:22.773 "compare": false, 00:09:22.773 "compare_and_write": false, 00:09:22.773 "abort": true, 00:09:22.773 "seek_hole": false, 00:09:22.773 "seek_data": false, 00:09:22.773 "copy": true, 00:09:22.773 "nvme_iov_md": false 00:09:22.773 }, 00:09:22.773 "memory_domains": [ 00:09:22.773 { 00:09:22.773 "dma_device_id": "system", 00:09:22.773 "dma_device_type": 1 00:09:22.773 }, 00:09:22.773 { 00:09:22.773 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:22.773 "dma_device_type": 2 00:09:22.773 } 00:09:22.773 ], 00:09:22.773 "driver_specific": {} 00:09:22.773 } 00:09:22.773 ] 00:09:22.773 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.773 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:22.773 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:09:22.773 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:22.773 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:22.773 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:22.773 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.773 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:22.773 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.773 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.773 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.774 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.774 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.774 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:22.774 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:22.774 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:22.774 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:22.774 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:22.774 "name": "Existed_Raid", 00:09:22.774 "uuid": "b9482f5a-1de1-433d-ac9b-407a7d01e33f", 00:09:22.774 "strip_size_kb": 64, 00:09:22.774 "state": "configuring", 00:09:22.774 "raid_level": "raid0", 00:09:22.774 "superblock": true, 
00:09:22.774 "num_base_bdevs": 3, 00:09:22.774 "num_base_bdevs_discovered": 2, 00:09:22.774 "num_base_bdevs_operational": 3, 00:09:22.774 "base_bdevs_list": [ 00:09:22.774 { 00:09:22.774 "name": "BaseBdev1", 00:09:22.774 "uuid": "a488c45f-27b5-4245-acec-3329971bfc1a", 00:09:22.774 "is_configured": true, 00:09:22.774 "data_offset": 2048, 00:09:22.774 "data_size": 63488 00:09:22.774 }, 00:09:22.774 { 00:09:22.774 "name": null, 00:09:22.774 "uuid": "67b7df7b-102f-460e-90ab-eeca32113f4c", 00:09:22.774 "is_configured": false, 00:09:22.774 "data_offset": 0, 00:09:22.774 "data_size": 63488 00:09:22.774 }, 00:09:22.774 { 00:09:22.774 "name": "BaseBdev3", 00:09:22.774 "uuid": "a4496442-33dc-4dfd-8b26-51cbaa42a635", 00:09:22.774 "is_configured": true, 00:09:22.774 "data_offset": 2048, 00:09:22.774 "data_size": 63488 00:09:22.774 } 00:09:22.774 ] 00:09:22.774 }' 00:09:22.774 03:12:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:22.774 03:12:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.032 [2024-10-09 03:12:06.241520] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.032 "name": "Existed_Raid", 00:09:23.032 "uuid": "b9482f5a-1de1-433d-ac9b-407a7d01e33f", 00:09:23.032 "strip_size_kb": 64, 00:09:23.032 "state": "configuring", 00:09:23.032 "raid_level": "raid0", 00:09:23.032 "superblock": true, 00:09:23.032 "num_base_bdevs": 3, 00:09:23.032 "num_base_bdevs_discovered": 1, 00:09:23.032 "num_base_bdevs_operational": 3, 00:09:23.032 "base_bdevs_list": [ 00:09:23.032 { 00:09:23.032 "name": "BaseBdev1", 00:09:23.032 "uuid": "a488c45f-27b5-4245-acec-3329971bfc1a", 00:09:23.032 "is_configured": true, 00:09:23.032 "data_offset": 2048, 00:09:23.032 "data_size": 63488 00:09:23.032 }, 00:09:23.032 { 00:09:23.032 "name": null, 00:09:23.032 "uuid": "67b7df7b-102f-460e-90ab-eeca32113f4c", 00:09:23.032 "is_configured": false, 00:09:23.032 "data_offset": 0, 00:09:23.032 "data_size": 63488 00:09:23.032 }, 00:09:23.032 { 00:09:23.032 "name": null, 00:09:23.032 "uuid": "a4496442-33dc-4dfd-8b26-51cbaa42a635", 00:09:23.032 "is_configured": false, 00:09:23.032 "data_offset": 0, 00:09:23.032 "data_size": 63488 00:09:23.032 } 00:09:23.032 ] 00:09:23.032 }' 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.032 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.657 [2024-10-09 03:12:06.768652] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.657 "name": "Existed_Raid", 00:09:23.657 "uuid": "b9482f5a-1de1-433d-ac9b-407a7d01e33f", 00:09:23.657 "strip_size_kb": 64, 00:09:23.657 "state": "configuring", 00:09:23.657 "raid_level": "raid0", 00:09:23.657 "superblock": true, 00:09:23.657 "num_base_bdevs": 3, 00:09:23.657 "num_base_bdevs_discovered": 2, 00:09:23.657 "num_base_bdevs_operational": 3, 00:09:23.657 "base_bdevs_list": [ 00:09:23.657 { 00:09:23.657 "name": "BaseBdev1", 00:09:23.657 "uuid": "a488c45f-27b5-4245-acec-3329971bfc1a", 00:09:23.657 "is_configured": true, 00:09:23.657 "data_offset": 2048, 00:09:23.657 "data_size": 63488 00:09:23.657 }, 00:09:23.657 { 00:09:23.657 "name": null, 00:09:23.657 "uuid": "67b7df7b-102f-460e-90ab-eeca32113f4c", 00:09:23.657 "is_configured": false, 00:09:23.657 "data_offset": 0, 00:09:23.657 "data_size": 63488 00:09:23.657 }, 00:09:23.657 { 00:09:23.657 "name": "BaseBdev3", 00:09:23.657 "uuid": "a4496442-33dc-4dfd-8b26-51cbaa42a635", 00:09:23.657 "is_configured": true, 00:09:23.657 "data_offset": 2048, 00:09:23.657 "data_size": 63488 00:09:23.657 } 00:09:23.657 ] 00:09:23.657 }' 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.657 03:12:06 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:23.916 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:23.916 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:23.916 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:23.916 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:23.916 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.175 [2024-10-09 03:12:07.243958] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.175 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.175 "name": "Existed_Raid", 00:09:24.175 "uuid": "b9482f5a-1de1-433d-ac9b-407a7d01e33f", 00:09:24.175 "strip_size_kb": 64, 00:09:24.175 "state": "configuring", 00:09:24.175 "raid_level": "raid0", 00:09:24.175 "superblock": true, 00:09:24.175 "num_base_bdevs": 3, 00:09:24.175 "num_base_bdevs_discovered": 1, 00:09:24.175 "num_base_bdevs_operational": 3, 00:09:24.175 "base_bdevs_list": [ 00:09:24.175 { 00:09:24.175 "name": null, 00:09:24.175 "uuid": "a488c45f-27b5-4245-acec-3329971bfc1a", 00:09:24.175 "is_configured": false, 00:09:24.175 "data_offset": 0, 00:09:24.175 "data_size": 63488 00:09:24.175 }, 00:09:24.175 { 00:09:24.176 "name": null, 00:09:24.176 "uuid": "67b7df7b-102f-460e-90ab-eeca32113f4c", 00:09:24.176 "is_configured": false, 00:09:24.176 "data_offset": 0, 00:09:24.176 
"data_size": 63488 00:09:24.176 }, 00:09:24.176 { 00:09:24.176 "name": "BaseBdev3", 00:09:24.176 "uuid": "a4496442-33dc-4dfd-8b26-51cbaa42a635", 00:09:24.176 "is_configured": true, 00:09:24.176 "data_offset": 2048, 00:09:24.176 "data_size": 63488 00:09:24.176 } 00:09:24.176 ] 00:09:24.176 }' 00:09:24.176 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.176 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.742 [2024-10-09 03:12:07.806648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:24.742 03:12:07 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:24.742 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.743 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:24.743 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.743 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.743 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:24.743 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.743 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.743 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:24.743 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.743 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.743 "name": "Existed_Raid", 00:09:24.743 "uuid": "b9482f5a-1de1-433d-ac9b-407a7d01e33f", 00:09:24.743 "strip_size_kb": 64, 00:09:24.743 "state": "configuring", 00:09:24.743 "raid_level": "raid0", 00:09:24.743 "superblock": true, 00:09:24.743 "num_base_bdevs": 3, 00:09:24.743 
"num_base_bdevs_discovered": 2, 00:09:24.743 "num_base_bdevs_operational": 3, 00:09:24.743 "base_bdevs_list": [ 00:09:24.743 { 00:09:24.743 "name": null, 00:09:24.743 "uuid": "a488c45f-27b5-4245-acec-3329971bfc1a", 00:09:24.743 "is_configured": false, 00:09:24.743 "data_offset": 0, 00:09:24.743 "data_size": 63488 00:09:24.743 }, 00:09:24.743 { 00:09:24.743 "name": "BaseBdev2", 00:09:24.743 "uuid": "67b7df7b-102f-460e-90ab-eeca32113f4c", 00:09:24.743 "is_configured": true, 00:09:24.743 "data_offset": 2048, 00:09:24.743 "data_size": 63488 00:09:24.743 }, 00:09:24.743 { 00:09:24.743 "name": "BaseBdev3", 00:09:24.743 "uuid": "a4496442-33dc-4dfd-8b26-51cbaa42a635", 00:09:24.743 "is_configured": true, 00:09:24.743 "data_offset": 2048, 00:09:24.743 "data_size": 63488 00:09:24.743 } 00:09:24.743 ] 00:09:24.743 }' 00:09:24.743 03:12:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.743 03:12:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.002 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.002 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.002 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:25.002 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.002 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.002 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:25.262 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.262 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:25.263 03:12:08 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u a488c45f-27b5-4245-acec-3329971bfc1a 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.263 [2024-10-09 03:12:08.396198] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:25.263 [2024-10-09 03:12:08.396436] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:25.263 [2024-10-09 03:12:08.396458] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:25.263 [2024-10-09 03:12:08.396751] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:25.263 [2024-10-09 03:12:08.396914] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:25.263 [2024-10-09 03:12:08.396933] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:25.263 NewBaseBdev 00:09:25.263 [2024-10-09 03:12:08.397075] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:25.263 
03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.263 [ 00:09:25.263 { 00:09:25.263 "name": "NewBaseBdev", 00:09:25.263 "aliases": [ 00:09:25.263 "a488c45f-27b5-4245-acec-3329971bfc1a" 00:09:25.263 ], 00:09:25.263 "product_name": "Malloc disk", 00:09:25.263 "block_size": 512, 00:09:25.263 "num_blocks": 65536, 00:09:25.263 "uuid": "a488c45f-27b5-4245-acec-3329971bfc1a", 00:09:25.263 "assigned_rate_limits": { 00:09:25.263 "rw_ios_per_sec": 0, 00:09:25.263 "rw_mbytes_per_sec": 0, 00:09:25.263 "r_mbytes_per_sec": 0, 00:09:25.263 "w_mbytes_per_sec": 0 00:09:25.263 }, 00:09:25.263 "claimed": true, 00:09:25.263 "claim_type": "exclusive_write", 00:09:25.263 "zoned": false, 00:09:25.263 "supported_io_types": { 00:09:25.263 "read": true, 00:09:25.263 "write": true, 00:09:25.263 
"unmap": true, 00:09:25.263 "flush": true, 00:09:25.263 "reset": true, 00:09:25.263 "nvme_admin": false, 00:09:25.263 "nvme_io": false, 00:09:25.263 "nvme_io_md": false, 00:09:25.263 "write_zeroes": true, 00:09:25.263 "zcopy": true, 00:09:25.263 "get_zone_info": false, 00:09:25.263 "zone_management": false, 00:09:25.263 "zone_append": false, 00:09:25.263 "compare": false, 00:09:25.263 "compare_and_write": false, 00:09:25.263 "abort": true, 00:09:25.263 "seek_hole": false, 00:09:25.263 "seek_data": false, 00:09:25.263 "copy": true, 00:09:25.263 "nvme_iov_md": false 00:09:25.263 }, 00:09:25.263 "memory_domains": [ 00:09:25.263 { 00:09:25.263 "dma_device_id": "system", 00:09:25.263 "dma_device_type": 1 00:09:25.263 }, 00:09:25.263 { 00:09:25.263 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.263 "dma_device_type": 2 00:09:25.263 } 00:09:25.263 ], 00:09:25.263 "driver_specific": {} 00:09:25.263 } 00:09:25.263 ] 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:25.263 "name": "Existed_Raid", 00:09:25.263 "uuid": "b9482f5a-1de1-433d-ac9b-407a7d01e33f", 00:09:25.263 "strip_size_kb": 64, 00:09:25.263 "state": "online", 00:09:25.263 "raid_level": "raid0", 00:09:25.263 "superblock": true, 00:09:25.263 "num_base_bdevs": 3, 00:09:25.263 "num_base_bdevs_discovered": 3, 00:09:25.263 "num_base_bdevs_operational": 3, 00:09:25.263 "base_bdevs_list": [ 00:09:25.263 { 00:09:25.263 "name": "NewBaseBdev", 00:09:25.263 "uuid": "a488c45f-27b5-4245-acec-3329971bfc1a", 00:09:25.263 "is_configured": true, 00:09:25.263 "data_offset": 2048, 00:09:25.263 "data_size": 63488 00:09:25.263 }, 00:09:25.263 { 00:09:25.263 "name": "BaseBdev2", 00:09:25.263 "uuid": "67b7df7b-102f-460e-90ab-eeca32113f4c", 00:09:25.263 "is_configured": true, 00:09:25.263 "data_offset": 2048, 00:09:25.263 "data_size": 63488 00:09:25.263 }, 00:09:25.263 { 00:09:25.263 "name": "BaseBdev3", 00:09:25.263 "uuid": "a4496442-33dc-4dfd-8b26-51cbaa42a635", 00:09:25.263 
"is_configured": true, 00:09:25.263 "data_offset": 2048, 00:09:25.263 "data_size": 63488 00:09:25.263 } 00:09:25.263 ] 00:09:25.263 }' 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:25.263 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.523 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:25.523 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:25.523 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:25.523 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:25.523 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:25.523 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:25.523 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:25.523 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:25.523 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.523 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.523 [2024-10-09 03:12:08.803936] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:25.783 "name": "Existed_Raid", 00:09:25.783 "aliases": [ 00:09:25.783 "b9482f5a-1de1-433d-ac9b-407a7d01e33f" 00:09:25.783 ], 00:09:25.783 "product_name": "Raid 
Volume", 00:09:25.783 "block_size": 512, 00:09:25.783 "num_blocks": 190464, 00:09:25.783 "uuid": "b9482f5a-1de1-433d-ac9b-407a7d01e33f", 00:09:25.783 "assigned_rate_limits": { 00:09:25.783 "rw_ios_per_sec": 0, 00:09:25.783 "rw_mbytes_per_sec": 0, 00:09:25.783 "r_mbytes_per_sec": 0, 00:09:25.783 "w_mbytes_per_sec": 0 00:09:25.783 }, 00:09:25.783 "claimed": false, 00:09:25.783 "zoned": false, 00:09:25.783 "supported_io_types": { 00:09:25.783 "read": true, 00:09:25.783 "write": true, 00:09:25.783 "unmap": true, 00:09:25.783 "flush": true, 00:09:25.783 "reset": true, 00:09:25.783 "nvme_admin": false, 00:09:25.783 "nvme_io": false, 00:09:25.783 "nvme_io_md": false, 00:09:25.783 "write_zeroes": true, 00:09:25.783 "zcopy": false, 00:09:25.783 "get_zone_info": false, 00:09:25.783 "zone_management": false, 00:09:25.783 "zone_append": false, 00:09:25.783 "compare": false, 00:09:25.783 "compare_and_write": false, 00:09:25.783 "abort": false, 00:09:25.783 "seek_hole": false, 00:09:25.783 "seek_data": false, 00:09:25.783 "copy": false, 00:09:25.783 "nvme_iov_md": false 00:09:25.783 }, 00:09:25.783 "memory_domains": [ 00:09:25.783 { 00:09:25.783 "dma_device_id": "system", 00:09:25.783 "dma_device_type": 1 00:09:25.783 }, 00:09:25.783 { 00:09:25.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.783 "dma_device_type": 2 00:09:25.783 }, 00:09:25.783 { 00:09:25.783 "dma_device_id": "system", 00:09:25.783 "dma_device_type": 1 00:09:25.783 }, 00:09:25.783 { 00:09:25.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.783 "dma_device_type": 2 00:09:25.783 }, 00:09:25.783 { 00:09:25.783 "dma_device_id": "system", 00:09:25.783 "dma_device_type": 1 00:09:25.783 }, 00:09:25.783 { 00:09:25.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:25.783 "dma_device_type": 2 00:09:25.783 } 00:09:25.783 ], 00:09:25.783 "driver_specific": { 00:09:25.783 "raid": { 00:09:25.783 "uuid": "b9482f5a-1de1-433d-ac9b-407a7d01e33f", 00:09:25.783 "strip_size_kb": 64, 00:09:25.783 "state": "online", 
00:09:25.783 "raid_level": "raid0", 00:09:25.783 "superblock": true, 00:09:25.783 "num_base_bdevs": 3, 00:09:25.783 "num_base_bdevs_discovered": 3, 00:09:25.783 "num_base_bdevs_operational": 3, 00:09:25.783 "base_bdevs_list": [ 00:09:25.783 { 00:09:25.783 "name": "NewBaseBdev", 00:09:25.783 "uuid": "a488c45f-27b5-4245-acec-3329971bfc1a", 00:09:25.783 "is_configured": true, 00:09:25.783 "data_offset": 2048, 00:09:25.783 "data_size": 63488 00:09:25.783 }, 00:09:25.783 { 00:09:25.783 "name": "BaseBdev2", 00:09:25.783 "uuid": "67b7df7b-102f-460e-90ab-eeca32113f4c", 00:09:25.783 "is_configured": true, 00:09:25.783 "data_offset": 2048, 00:09:25.783 "data_size": 63488 00:09:25.783 }, 00:09:25.783 { 00:09:25.783 "name": "BaseBdev3", 00:09:25.783 "uuid": "a4496442-33dc-4dfd-8b26-51cbaa42a635", 00:09:25.783 "is_configured": true, 00:09:25.783 "data_offset": 2048, 00:09:25.783 "data_size": 63488 00:09:25.783 } 00:09:25.783 ] 00:09:25.783 } 00:09:25.783 } 00:09:25.783 }' 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:25.783 BaseBdev2 00:09:25.783 BaseBdev3' 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b 
NewBaseBdev 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.783 03:12:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.783 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.783 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.783 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.783 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:25.783 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:25.783 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:25.783 03:12:09 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.783 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.783 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:25.783 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:25.783 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:25.783 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:25.783 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:25.783 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:25.783 [2024-10-09 03:12:09.079061] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:25.783 [2024-10-09 03:12:09.079179] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.783 [2024-10-09 03:12:09.079283] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.783 [2024-10-09 03:12:09.079363] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.783 [2024-10-09 03:12:09.079418] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:25.783 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.043 03:12:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64552 00:09:26.043 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 64552 ']' 00:09:26.043 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 
64552 00:09:26.043 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:09:26.043 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.043 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 64552 00:09:26.043 killing process with pid 64552 00:09:26.043 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:26.043 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:26.043 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 64552' 00:09:26.043 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 64552 00:09:26.043 [2024-10-09 03:12:09.126244] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:26.043 03:12:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 64552 00:09:26.302 [2024-10-09 03:12:09.454053] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:27.691 03:12:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:27.691 00:09:27.691 real 0m10.670s 00:09:27.691 user 0m16.575s 00:09:27.691 sys 0m1.900s 00:09:27.691 ************************************ 00:09:27.691 END TEST raid_state_function_test_sb 00:09:27.691 ************************************ 00:09:27.691 03:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:27.691 03:12:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:27.691 03:12:10 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:27.691 03:12:10 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:27.691 
03:12:10 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:27.691 03:12:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:27.691 ************************************ 00:09:27.691 START TEST raid_superblock_test 00:09:27.691 ************************************ 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 3 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 
00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=65172 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 65172 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 65172 ']' 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:27.691 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:27.691 03:12:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.951 [2024-10-09 03:12:10.999495] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:09:27.951 [2024-10-09 03:12:10.999613] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65172 ] 00:09:27.951 [2024-10-09 03:12:11.151261] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:28.211 [2024-10-09 03:12:11.400118] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:28.471 [2024-10-09 03:12:11.635505] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.471 [2024-10-09 03:12:11.635656] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:28.731 
03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.731 malloc1 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.731 [2024-10-09 03:12:11.878916] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:28.731 [2024-10-09 03:12:11.879077] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.731 [2024-10-09 03:12:11.879122] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:28.731 [2024-10-09 03:12:11.879156] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.731 [2024-10-09 03:12:11.881517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.731 [2024-10-09 03:12:11.881591] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:28.731 pt1 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.731 malloc2 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.731 [2024-10-09 03:12:11.949709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:28.731 [2024-10-09 03:12:11.949825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.731 [2024-10-09 03:12:11.949879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:28.731 [2024-10-09 03:12:11.949913] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.731 [2024-10-09 03:12:11.952300] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.731 [2024-10-09 03:12:11.952371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:28.731 
pt2 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.731 03:12:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.731 malloc3 00:09:28.731 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.731 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:28.731 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.731 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.731 [2024-10-09 03:12:12.012088] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:28.731 [2024-10-09 03:12:12.012145] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:28.731 [2024-10-09 03:12:12.012171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:28.731 [2024-10-09 03:12:12.012181] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:28.731 [2024-10-09 03:12:12.014614] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:28.731 [2024-10-09 03:12:12.014653] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:28.731 pt3 00:09:28.731 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.731 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:28.731 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:28.731 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:09:28.731 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.731 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.731 [2024-10-09 03:12:12.024149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:28.731 [2024-10-09 03:12:12.026384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:28.731 [2024-10-09 03:12:12.026503] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:28.732 [2024-10-09 03:12:12.026703] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:28.732 [2024-10-09 03:12:12.026752] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:28.732 [2024-10-09 03:12:12.027033] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:09:28.732 [2024-10-09 03:12:12.027236] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:28.732 [2024-10-09 03:12:12.027278] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:28.732 [2024-10-09 03:12:12.027471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.732 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.732 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:28.732 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:28.732 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:28.732 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:28.732 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:28.732 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:28.732 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.732 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.732 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.732 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.991 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.991 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:28.991 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:28.991 03:12:12 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.991 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:28.991 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.991 "name": "raid_bdev1", 00:09:28.991 "uuid": "7f08144e-5914-4a76-806f-adf826833661", 00:09:28.991 "strip_size_kb": 64, 00:09:28.991 "state": "online", 00:09:28.991 "raid_level": "raid0", 00:09:28.991 "superblock": true, 00:09:28.991 "num_base_bdevs": 3, 00:09:28.991 "num_base_bdevs_discovered": 3, 00:09:28.991 "num_base_bdevs_operational": 3, 00:09:28.991 "base_bdevs_list": [ 00:09:28.991 { 00:09:28.991 "name": "pt1", 00:09:28.991 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:28.991 "is_configured": true, 00:09:28.991 "data_offset": 2048, 00:09:28.991 "data_size": 63488 00:09:28.991 }, 00:09:28.991 { 00:09:28.991 "name": "pt2", 00:09:28.991 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:28.991 "is_configured": true, 00:09:28.991 "data_offset": 2048, 00:09:28.991 "data_size": 63488 00:09:28.991 }, 00:09:28.991 { 00:09:28.991 "name": "pt3", 00:09:28.991 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:28.991 "is_configured": true, 00:09:28.991 "data_offset": 2048, 00:09:28.991 "data_size": 63488 00:09:28.991 } 00:09:28.991 ] 00:09:28.991 }' 00:09:28.991 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.991 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.250 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:29.251 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:29.251 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.251 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:09:29.251 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.251 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.251 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:29.251 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.251 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.251 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.251 [2024-10-09 03:12:12.479643] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.251 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.251 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.251 "name": "raid_bdev1", 00:09:29.251 "aliases": [ 00:09:29.251 "7f08144e-5914-4a76-806f-adf826833661" 00:09:29.251 ], 00:09:29.251 "product_name": "Raid Volume", 00:09:29.251 "block_size": 512, 00:09:29.251 "num_blocks": 190464, 00:09:29.251 "uuid": "7f08144e-5914-4a76-806f-adf826833661", 00:09:29.251 "assigned_rate_limits": { 00:09:29.251 "rw_ios_per_sec": 0, 00:09:29.251 "rw_mbytes_per_sec": 0, 00:09:29.251 "r_mbytes_per_sec": 0, 00:09:29.251 "w_mbytes_per_sec": 0 00:09:29.251 }, 00:09:29.251 "claimed": false, 00:09:29.251 "zoned": false, 00:09:29.251 "supported_io_types": { 00:09:29.251 "read": true, 00:09:29.251 "write": true, 00:09:29.251 "unmap": true, 00:09:29.251 "flush": true, 00:09:29.251 "reset": true, 00:09:29.251 "nvme_admin": false, 00:09:29.251 "nvme_io": false, 00:09:29.251 "nvme_io_md": false, 00:09:29.251 "write_zeroes": true, 00:09:29.251 "zcopy": false, 00:09:29.251 "get_zone_info": false, 00:09:29.251 "zone_management": false, 00:09:29.251 "zone_append": false, 00:09:29.251 "compare": 
false, 00:09:29.251 "compare_and_write": false, 00:09:29.251 "abort": false, 00:09:29.251 "seek_hole": false, 00:09:29.251 "seek_data": false, 00:09:29.251 "copy": false, 00:09:29.251 "nvme_iov_md": false 00:09:29.251 }, 00:09:29.251 "memory_domains": [ 00:09:29.251 { 00:09:29.251 "dma_device_id": "system", 00:09:29.251 "dma_device_type": 1 00:09:29.251 }, 00:09:29.251 { 00:09:29.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.251 "dma_device_type": 2 00:09:29.251 }, 00:09:29.251 { 00:09:29.251 "dma_device_id": "system", 00:09:29.251 "dma_device_type": 1 00:09:29.251 }, 00:09:29.251 { 00:09:29.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.251 "dma_device_type": 2 00:09:29.251 }, 00:09:29.251 { 00:09:29.251 "dma_device_id": "system", 00:09:29.251 "dma_device_type": 1 00:09:29.251 }, 00:09:29.251 { 00:09:29.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.251 "dma_device_type": 2 00:09:29.251 } 00:09:29.251 ], 00:09:29.251 "driver_specific": { 00:09:29.251 "raid": { 00:09:29.251 "uuid": "7f08144e-5914-4a76-806f-adf826833661", 00:09:29.251 "strip_size_kb": 64, 00:09:29.251 "state": "online", 00:09:29.251 "raid_level": "raid0", 00:09:29.251 "superblock": true, 00:09:29.251 "num_base_bdevs": 3, 00:09:29.251 "num_base_bdevs_discovered": 3, 00:09:29.251 "num_base_bdevs_operational": 3, 00:09:29.251 "base_bdevs_list": [ 00:09:29.251 { 00:09:29.251 "name": "pt1", 00:09:29.251 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:29.251 "is_configured": true, 00:09:29.251 "data_offset": 2048, 00:09:29.251 "data_size": 63488 00:09:29.251 }, 00:09:29.251 { 00:09:29.251 "name": "pt2", 00:09:29.251 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.251 "is_configured": true, 00:09:29.251 "data_offset": 2048, 00:09:29.251 "data_size": 63488 00:09:29.251 }, 00:09:29.251 { 00:09:29.251 "name": "pt3", 00:09:29.251 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.251 "is_configured": true, 00:09:29.251 "data_offset": 2048, 00:09:29.251 "data_size": 
63488 00:09:29.251 } 00:09:29.251 ] 00:09:29.251 } 00:09:29.251 } 00:09:29.251 }' 00:09:29.251 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:29.511 pt2 00:09:29.511 pt3' 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.511 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.512 [2024-10-09 03:12:12.763080] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7f08144e-5914-4a76-806f-adf826833661 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7f08144e-5914-4a76-806f-adf826833661 ']' 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.512 [2024-10-09 03:12:12.806732] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.512 [2024-10-09 03:12:12.806812] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:29.512 [2024-10-09 03:12:12.806918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:29.512 [2024-10-09 03:12:12.807005] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:29.512 [2024-10-09 03:12:12.807051] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:29.512 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 [2024-10-09 03:12:12.946531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:29.772 [2024-10-09 03:12:12.948707] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:29.772 [2024-10-09 03:12:12.948823] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:09:29.772 [2024-10-09 03:12:12.948891] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:29.772 [2024-10-09 03:12:12.948947] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:29.772 [2024-10-09 03:12:12.948968] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:09:29.772 [2024-10-09 03:12:12.948984] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:29.772 [2024-10-09 03:12:12.948994] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:29.772 request: 00:09:29.772 { 00:09:29.772 "name": "raid_bdev1", 00:09:29.772 "raid_level": "raid0", 00:09:29.772 "base_bdevs": [ 00:09:29.772 "malloc1", 00:09:29.772 "malloc2", 00:09:29.772 "malloc3" 00:09:29.772 ], 00:09:29.772 "strip_size_kb": 64, 00:09:29.772 "superblock": false, 00:09:29.772 "method": "bdev_raid_create", 00:09:29.772 "req_id": 1 00:09:29.772 } 00:09:29.772 Got JSON-RPC error response 00:09:29.772 response: 00:09:29.772 { 00:09:29.772 "code": -17, 00:09:29.772 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:29.772 } 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.772 [2024-10-09 03:12:12.994403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:29.772 [2024-10-09 03:12:12.994492] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:29.772 [2024-10-09 03:12:12.994529] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:09:29.772 [2024-10-09 03:12:12.994554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:29.772 [2024-10-09 03:12:12.996967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:29.772 [2024-10-09 03:12:12.997038] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:29.772 [2024-10-09 03:12:12.997133] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:29.772 [2024-10-09 03:12:12.997210] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:09:29.772 pt1 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:29.772 03:12:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:29.773 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.773 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.773 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.773 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.773 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.773 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:29.773 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:29.773 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.773 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:29.773 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.773 "name": "raid_bdev1", 00:09:29.773 "uuid": "7f08144e-5914-4a76-806f-adf826833661", 00:09:29.773 
"strip_size_kb": 64, 00:09:29.773 "state": "configuring", 00:09:29.773 "raid_level": "raid0", 00:09:29.773 "superblock": true, 00:09:29.773 "num_base_bdevs": 3, 00:09:29.773 "num_base_bdevs_discovered": 1, 00:09:29.773 "num_base_bdevs_operational": 3, 00:09:29.773 "base_bdevs_list": [ 00:09:29.773 { 00:09:29.773 "name": "pt1", 00:09:29.773 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:29.773 "is_configured": true, 00:09:29.773 "data_offset": 2048, 00:09:29.773 "data_size": 63488 00:09:29.773 }, 00:09:29.773 { 00:09:29.773 "name": null, 00:09:29.773 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:29.773 "is_configured": false, 00:09:29.773 "data_offset": 2048, 00:09:29.773 "data_size": 63488 00:09:29.773 }, 00:09:29.773 { 00:09:29.773 "name": null, 00:09:29.773 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:29.773 "is_configured": false, 00:09:29.773 "data_offset": 2048, 00:09:29.773 "data_size": 63488 00:09:29.773 } 00:09:29.773 ] 00:09:29.773 }' 00:09:29.773 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.773 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.342 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:09:30.342 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:30.342 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.342 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.342 [2024-10-09 03:12:13.437736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:30.342 [2024-10-09 03:12:13.437907] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.342 [2024-10-09 03:12:13.437941] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:09:30.342 [2024-10-09 03:12:13.437951] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.342 [2024-10-09 03:12:13.438473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.342 [2024-10-09 03:12:13.438492] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:30.342 [2024-10-09 03:12:13.438594] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:30.342 [2024-10-09 03:12:13.438619] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:30.342 pt2 00:09:30.342 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.342 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:09:30.342 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.342 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.342 [2024-10-09 03:12:13.449701] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:09:30.342 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.343 03:12:13 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.343 "name": "raid_bdev1", 00:09:30.343 "uuid": "7f08144e-5914-4a76-806f-adf826833661", 00:09:30.343 "strip_size_kb": 64, 00:09:30.343 "state": "configuring", 00:09:30.343 "raid_level": "raid0", 00:09:30.343 "superblock": true, 00:09:30.343 "num_base_bdevs": 3, 00:09:30.343 "num_base_bdevs_discovered": 1, 00:09:30.343 "num_base_bdevs_operational": 3, 00:09:30.343 "base_bdevs_list": [ 00:09:30.343 { 00:09:30.343 "name": "pt1", 00:09:30.343 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.343 "is_configured": true, 00:09:30.343 "data_offset": 2048, 00:09:30.343 "data_size": 63488 00:09:30.343 }, 00:09:30.343 { 00:09:30.343 "name": null, 00:09:30.343 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.343 "is_configured": false, 00:09:30.343 "data_offset": 0, 00:09:30.343 "data_size": 63488 00:09:30.343 }, 00:09:30.343 { 00:09:30.343 "name": null, 00:09:30.343 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.343 
"is_configured": false, 00:09:30.343 "data_offset": 2048, 00:09:30.343 "data_size": 63488 00:09:30.343 } 00:09:30.343 ] 00:09:30.343 }' 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.343 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.604 [2024-10-09 03:12:13.852939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:30.604 [2024-10-09 03:12:13.853039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.604 [2024-10-09 03:12:13.853072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:09:30.604 [2024-10-09 03:12:13.853101] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.604 [2024-10-09 03:12:13.853536] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.604 [2024-10-09 03:12:13.853596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:30.604 [2024-10-09 03:12:13.853693] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:30.604 [2024-10-09 03:12:13.853760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:30.604 pt2 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.604 [2024-10-09 03:12:13.864958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:09:30.604 [2024-10-09 03:12:13.865038] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:30.604 [2024-10-09 03:12:13.865065] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:09:30.604 [2024-10-09 03:12:13.865094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:30.604 [2024-10-09 03:12:13.865457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:30.604 [2024-10-09 03:12:13.865517] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:09:30.604 [2024-10-09 03:12:13.865599] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:09:30.604 [2024-10-09 03:12:13.865646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:09:30.604 [2024-10-09 03:12:13.865786] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:30.604 [2024-10-09 03:12:13.865826] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:30.604 [2024-10-09 03:12:13.866140] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:30.604 [2024-10-09 03:12:13.866309] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:30.604 [2024-10-09 03:12:13.866347] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:30.604 [2024-10-09 03:12:13.866532] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:30.604 pt3 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.604 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:30.864 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:30.864 "name": "raid_bdev1", 00:09:30.864 "uuid": "7f08144e-5914-4a76-806f-adf826833661", 00:09:30.864 "strip_size_kb": 64, 00:09:30.864 "state": "online", 00:09:30.864 "raid_level": "raid0", 00:09:30.864 "superblock": true, 00:09:30.864 "num_base_bdevs": 3, 00:09:30.864 "num_base_bdevs_discovered": 3, 00:09:30.864 "num_base_bdevs_operational": 3, 00:09:30.864 "base_bdevs_list": [ 00:09:30.864 { 00:09:30.864 "name": "pt1", 00:09:30.864 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:30.864 "is_configured": true, 00:09:30.864 "data_offset": 2048, 00:09:30.864 "data_size": 63488 00:09:30.864 }, 00:09:30.864 { 00:09:30.864 "name": "pt2", 00:09:30.864 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:30.864 "is_configured": true, 00:09:30.864 "data_offset": 2048, 00:09:30.864 "data_size": 63488 00:09:30.864 }, 00:09:30.864 { 00:09:30.864 "name": "pt3", 00:09:30.864 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:30.864 "is_configured": true, 00:09:30.864 "data_offset": 2048, 00:09:30.864 "data_size": 63488 00:09:30.864 } 00:09:30.864 ] 00:09:30.864 }' 00:09:30.864 03:12:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:30.864 03:12:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.122 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:31.122 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:31.122 03:12:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:31.122 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:31.122 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:31.122 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:31.122 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:31.122 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.122 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:31.122 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.122 [2024-10-09 03:12:14.296506] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.122 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.122 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:31.122 "name": "raid_bdev1", 00:09:31.122 "aliases": [ 00:09:31.122 "7f08144e-5914-4a76-806f-adf826833661" 00:09:31.122 ], 00:09:31.122 "product_name": "Raid Volume", 00:09:31.122 "block_size": 512, 00:09:31.122 "num_blocks": 190464, 00:09:31.122 "uuid": "7f08144e-5914-4a76-806f-adf826833661", 00:09:31.122 "assigned_rate_limits": { 00:09:31.123 "rw_ios_per_sec": 0, 00:09:31.123 "rw_mbytes_per_sec": 0, 00:09:31.123 "r_mbytes_per_sec": 0, 00:09:31.123 "w_mbytes_per_sec": 0 00:09:31.123 }, 00:09:31.123 "claimed": false, 00:09:31.123 "zoned": false, 00:09:31.123 "supported_io_types": { 00:09:31.123 "read": true, 00:09:31.123 "write": true, 00:09:31.123 "unmap": true, 00:09:31.123 "flush": true, 00:09:31.123 "reset": true, 00:09:31.123 "nvme_admin": false, 00:09:31.123 "nvme_io": false, 00:09:31.123 "nvme_io_md": false, 00:09:31.123 
"write_zeroes": true, 00:09:31.123 "zcopy": false, 00:09:31.123 "get_zone_info": false, 00:09:31.123 "zone_management": false, 00:09:31.123 "zone_append": false, 00:09:31.123 "compare": false, 00:09:31.123 "compare_and_write": false, 00:09:31.123 "abort": false, 00:09:31.123 "seek_hole": false, 00:09:31.123 "seek_data": false, 00:09:31.123 "copy": false, 00:09:31.123 "nvme_iov_md": false 00:09:31.123 }, 00:09:31.123 "memory_domains": [ 00:09:31.123 { 00:09:31.123 "dma_device_id": "system", 00:09:31.123 "dma_device_type": 1 00:09:31.123 }, 00:09:31.123 { 00:09:31.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.123 "dma_device_type": 2 00:09:31.123 }, 00:09:31.123 { 00:09:31.123 "dma_device_id": "system", 00:09:31.123 "dma_device_type": 1 00:09:31.123 }, 00:09:31.123 { 00:09:31.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.123 "dma_device_type": 2 00:09:31.123 }, 00:09:31.123 { 00:09:31.123 "dma_device_id": "system", 00:09:31.123 "dma_device_type": 1 00:09:31.123 }, 00:09:31.123 { 00:09:31.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:31.123 "dma_device_type": 2 00:09:31.123 } 00:09:31.123 ], 00:09:31.123 "driver_specific": { 00:09:31.123 "raid": { 00:09:31.123 "uuid": "7f08144e-5914-4a76-806f-adf826833661", 00:09:31.123 "strip_size_kb": 64, 00:09:31.123 "state": "online", 00:09:31.123 "raid_level": "raid0", 00:09:31.123 "superblock": true, 00:09:31.123 "num_base_bdevs": 3, 00:09:31.123 "num_base_bdevs_discovered": 3, 00:09:31.123 "num_base_bdevs_operational": 3, 00:09:31.123 "base_bdevs_list": [ 00:09:31.123 { 00:09:31.123 "name": "pt1", 00:09:31.123 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:31.123 "is_configured": true, 00:09:31.123 "data_offset": 2048, 00:09:31.123 "data_size": 63488 00:09:31.123 }, 00:09:31.123 { 00:09:31.123 "name": "pt2", 00:09:31.123 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:31.123 "is_configured": true, 00:09:31.123 "data_offset": 2048, 00:09:31.123 "data_size": 63488 00:09:31.123 }, 00:09:31.123 
{ 00:09:31.123 "name": "pt3", 00:09:31.123 "uuid": "00000000-0000-0000-0000-000000000003", 00:09:31.123 "is_configured": true, 00:09:31.123 "data_offset": 2048, 00:09:31.123 "data_size": 63488 00:09:31.123 } 00:09:31.123 ] 00:09:31.123 } 00:09:31.123 } 00:09:31.123 }' 00:09:31.123 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:31.123 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:31.123 pt2 00:09:31.123 pt3' 00:09:31.123 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:31.382 03:12:14 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.382 
[2024-10-09 03:12:14.599949] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7f08144e-5914-4a76-806f-adf826833661 '!=' 7f08144e-5914-4a76-806f-adf826833661 ']' 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 65172 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 65172 ']' 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 65172 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65172 00:09:31.382 killing process with pid 65172 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65172' 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 65172 00:09:31.382 [2024-10-09 03:12:14.672809] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:31.382 [2024-10-09 03:12:14.672924] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:31.382 [2024-10-09 03:12:14.672986] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:31.382 [2024-10-09 03:12:14.672999] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:31.382 03:12:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 65172 00:09:31.950 [2024-10-09 03:12:15.000646] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:33.359 03:12:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:33.359 00:09:33.359 real 0m5.476s 00:09:33.359 user 0m7.565s 00:09:33.359 sys 0m1.021s 00:09:33.359 ************************************ 00:09:33.359 END TEST raid_superblock_test 00:09:33.359 ************************************ 00:09:33.359 03:12:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:33.359 03:12:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.359 03:12:16 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:09:33.359 03:12:16 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:33.359 03:12:16 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:33.359 03:12:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:33.359 ************************************ 00:09:33.359 START TEST raid_read_error_test 00:09:33.359 ************************************ 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 read 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:33.359 03:12:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.920GCaaQlu 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65431 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65431 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 65431 ']' 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:33.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:33.359 03:12:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:33.360 [2024-10-09 03:12:16.563546] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:09:33.360 [2024-10-09 03:12:16.563661] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65431 ] 00:09:33.619 [2024-10-09 03:12:16.725772] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:33.879 [2024-10-09 03:12:16.975655] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:34.139 [2024-10-09 03:12:17.196819] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.139 [2024-10-09 03:12:17.196878] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:34.139 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:34.139 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:34.139 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.139 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:34.139 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.139 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.399 BaseBdev1_malloc 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.399 true 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.399 [2024-10-09 03:12:17.507110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:34.399 [2024-10-09 03:12:17.507171] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.399 [2024-10-09 03:12:17.507190] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:34.399 [2024-10-09 03:12:17.507202] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.399 [2024-10-09 03:12:17.509575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.399 [2024-10-09 03:12:17.509619] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:34.399 BaseBdev1 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.399 BaseBdev2_malloc 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.399 true 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.399 [2024-10-09 03:12:17.591690] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:34.399 [2024-10-09 03:12:17.591749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.399 [2024-10-09 03:12:17.591766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:34.399 [2024-10-09 03:12:17.591778] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.399 [2024-10-09 03:12:17.594165] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.399 [2024-10-09 03:12:17.594204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:34.399 BaseBdev2 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.399 BaseBdev3_malloc 00:09:34.399 03:12:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.399 true 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.399 [2024-10-09 03:12:17.664774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:34.399 [2024-10-09 03:12:17.664881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:34.399 [2024-10-09 03:12:17.664902] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:34.399 [2024-10-09 03:12:17.664915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:34.399 [2024-10-09 03:12:17.667270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:34.399 [2024-10-09 03:12:17.667307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:09:34.399 BaseBdev3 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.399 [2024-10-09 03:12:17.676849] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:34.399 [2024-10-09 03:12:17.678914] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:34.399 [2024-10-09 03:12:17.678993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:34.399 [2024-10-09 03:12:17.679194] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:34.399 [2024-10-09 03:12:17.679214] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:34.399 [2024-10-09 03:12:17.679462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:34.399 [2024-10-09 03:12:17.679618] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:34.399 [2024-10-09 03:12:17.679630] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:34.399 [2024-10-09 03:12:17.679773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:34.399 03:12:17 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:34.399 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:34.400 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:34.400 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:34.400 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:34.400 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:34.400 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:34.400 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:34.400 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.659 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:34.659 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:34.659 "name": "raid_bdev1", 00:09:34.659 "uuid": "cd7a298d-0a36-44b8-bc62-a74200f4c380", 00:09:34.659 "strip_size_kb": 64, 00:09:34.659 "state": "online", 00:09:34.659 "raid_level": "raid0", 00:09:34.659 "superblock": true, 00:09:34.659 "num_base_bdevs": 3, 00:09:34.659 "num_base_bdevs_discovered": 3, 00:09:34.659 "num_base_bdevs_operational": 3, 00:09:34.659 "base_bdevs_list": [ 00:09:34.659 { 00:09:34.659 "name": "BaseBdev1", 00:09:34.659 "uuid": "ed5e5dba-589a-5be0-a659-7d4074669660", 00:09:34.659 "is_configured": true, 00:09:34.659 "data_offset": 2048, 00:09:34.659 "data_size": 63488 00:09:34.659 }, 00:09:34.659 { 00:09:34.659 "name": "BaseBdev2", 00:09:34.659 "uuid": "da0ad34f-8b5d-5a05-8c72-1af71a779eb1", 00:09:34.659 "is_configured": true, 00:09:34.659 "data_offset": 2048, 00:09:34.659 "data_size": 63488 
00:09:34.659 }, 00:09:34.659 { 00:09:34.659 "name": "BaseBdev3", 00:09:34.659 "uuid": "3d96a986-5a2b-5789-bf20-d8745ae3d7c0", 00:09:34.659 "is_configured": true, 00:09:34.659 "data_offset": 2048, 00:09:34.659 "data_size": 63488 00:09:34.659 } 00:09:34.659 ] 00:09:34.659 }' 00:09:34.659 03:12:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:34.659 03:12:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.918 03:12:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:34.918 03:12:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:34.918 [2024-10-09 03:12:18.185430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.855 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.113 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:36.113 "name": "raid_bdev1", 00:09:36.113 "uuid": "cd7a298d-0a36-44b8-bc62-a74200f4c380", 00:09:36.113 "strip_size_kb": 64, 00:09:36.113 "state": "online", 00:09:36.113 "raid_level": "raid0", 00:09:36.113 "superblock": true, 00:09:36.113 "num_base_bdevs": 3, 00:09:36.113 "num_base_bdevs_discovered": 3, 00:09:36.113 "num_base_bdevs_operational": 3, 00:09:36.113 "base_bdevs_list": [ 00:09:36.113 { 00:09:36.113 "name": "BaseBdev1", 00:09:36.113 "uuid": "ed5e5dba-589a-5be0-a659-7d4074669660", 00:09:36.113 "is_configured": true, 00:09:36.113 "data_offset": 2048, 00:09:36.113 "data_size": 63488 
00:09:36.113 }, 00:09:36.113 { 00:09:36.113 "name": "BaseBdev2", 00:09:36.113 "uuid": "da0ad34f-8b5d-5a05-8c72-1af71a779eb1", 00:09:36.113 "is_configured": true, 00:09:36.113 "data_offset": 2048, 00:09:36.113 "data_size": 63488 00:09:36.113 }, 00:09:36.113 { 00:09:36.113 "name": "BaseBdev3", 00:09:36.113 "uuid": "3d96a986-5a2b-5789-bf20-d8745ae3d7c0", 00:09:36.113 "is_configured": true, 00:09:36.113 "data_offset": 2048, 00:09:36.113 "data_size": 63488 00:09:36.113 } 00:09:36.113 ] 00:09:36.113 }' 00:09:36.113 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:36.113 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.372 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:36.372 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:36.372 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:36.372 [2024-10-09 03:12:19.557869] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:36.372 [2024-10-09 03:12:19.557913] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:36.372 [2024-10-09 03:12:19.560454] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:36.372 [2024-10-09 03:12:19.560505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:36.372 [2024-10-09 03:12:19.560546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:36.372 [2024-10-09 03:12:19.560556] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:36.372 { 00:09:36.372 "results": [ 00:09:36.372 { 00:09:36.372 "job": "raid_bdev1", 00:09:36.372 "core_mask": "0x1", 00:09:36.372 "workload": "randrw", 00:09:36.372 "percentage": 50, 
00:09:36.372 "status": "finished", 00:09:36.372 "queue_depth": 1, 00:09:36.372 "io_size": 131072, 00:09:36.372 "runtime": 1.373109, 00:09:36.372 "iops": 13739.623001524278, 00:09:36.372 "mibps": 1717.4528751905348, 00:09:36.372 "io_failed": 1, 00:09:36.372 "io_timeout": 0, 00:09:36.372 "avg_latency_us": 102.54314348914014, 00:09:36.372 "min_latency_us": 25.823580786026202, 00:09:36.372 "max_latency_us": 1502.46288209607 00:09:36.373 } 00:09:36.373 ], 00:09:36.373 "core_count": 1 00:09:36.373 } 00:09:36.373 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:36.373 03:12:19 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65431 00:09:36.373 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 65431 ']' 00:09:36.373 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 65431 00:09:36.373 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:09:36.373 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:36.373 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65431 00:09:36.373 killing process with pid 65431 00:09:36.373 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:36.373 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:36.373 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65431' 00:09:36.373 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 65431 00:09:36.373 [2024-10-09 03:12:19.611959] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:36.373 03:12:19 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 65431 00:09:36.631 [2024-10-09 
03:12:19.861028] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:38.010 03:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.920GCaaQlu 00:09:38.010 03:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:38.010 03:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:38.010 03:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:09:38.010 03:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:38.010 03:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:38.010 03:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:38.010 ************************************ 00:09:38.010 END TEST raid_read_error_test 00:09:38.010 ************************************ 00:09:38.010 03:12:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:09:38.010 00:09:38.010 real 0m4.839s 00:09:38.010 user 0m5.595s 00:09:38.010 sys 0m0.684s 00:09:38.010 03:12:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.010 03:12:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.275 03:12:21 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:09:38.275 03:12:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:38.275 03:12:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:38.275 03:12:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:38.275 ************************************ 00:09:38.275 START TEST raid_write_error_test 00:09:38.275 ************************************ 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 3 write 00:09:38.275 03:12:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:38.275 03:12:21 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.UykEwm8xkQ 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65577 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65577 00:09:38.275 03:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 65577 ']' 00:09:38.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.276 03:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.276 03:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:38.276 03:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:09:38.276 03:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:38.276 03:12:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.276 [2024-10-09 03:12:21.473633] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:09:38.276 [2024-10-09 03:12:21.473763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65577 ] 00:09:38.542 [2024-10-09 03:12:21.638350] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:38.801 [2024-10-09 03:12:21.903741] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:39.062 [2024-10-09 03:12:22.143505] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.062 [2024-10-09 03:12:22.143546] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:39.062 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:39.062 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:09:39.062 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:39.062 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:39.062 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.062 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.062 BaseBdev1_malloc 00:09:39.062 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.062 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:09:39.062 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.062 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.322 true 00:09:39.322 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.322 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:39.322 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.322 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.322 [2024-10-09 03:12:22.376249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:39.322 [2024-10-09 03:12:22.376350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.322 [2024-10-09 03:12:22.376401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:39.322 [2024-10-09 03:12:22.376433] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.322 [2024-10-09 03:12:22.378866] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.322 [2024-10-09 03:12:22.378937] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:39.322 BaseBdev1 00:09:39.322 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.322 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:39.322 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:39.322 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:39.323 BaseBdev2_malloc 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.323 true 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.323 [2024-10-09 03:12:22.460262] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:39.323 [2024-10-09 03:12:22.460317] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.323 [2024-10-09 03:12:22.460334] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:39.323 [2024-10-09 03:12:22.460345] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.323 [2024-10-09 03:12:22.462721] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.323 [2024-10-09 03:12:22.462760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:39.323 BaseBdev2 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:39.323 03:12:22 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.323 BaseBdev3_malloc 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.323 true 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.323 [2024-10-09 03:12:22.534176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:09:39.323 [2024-10-09 03:12:22.534228] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.323 [2024-10-09 03:12:22.534260] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:39.323 [2024-10-09 03:12:22.534272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.323 [2024-10-09 03:12:22.536589] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.323 [2024-10-09 03:12:22.536627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:09:39.323 BaseBdev3 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.323 [2024-10-09 03:12:22.546232] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.323 [2024-10-09 03:12:22.548249] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:39.323 [2024-10-09 03:12:22.548384] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:39.323 [2024-10-09 03:12:22.548582] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:39.323 [2024-10-09 03:12:22.548595] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:39.323 [2024-10-09 03:12:22.548857] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:39.323 [2024-10-09 03:12:22.549041] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:39.323 [2024-10-09 03:12:22.549054] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:39.323 [2024-10-09 03:12:22.549206] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.323 "name": "raid_bdev1", 00:09:39.323 "uuid": "9a3157a5-8fc8-4b58-aea1-7412e325d04d", 00:09:39.323 "strip_size_kb": 64, 00:09:39.323 "state": "online", 00:09:39.323 "raid_level": "raid0", 00:09:39.323 "superblock": true, 00:09:39.323 "num_base_bdevs": 3, 00:09:39.323 "num_base_bdevs_discovered": 3, 00:09:39.323 "num_base_bdevs_operational": 3, 00:09:39.323 "base_bdevs_list": [ 00:09:39.323 { 00:09:39.323 "name": "BaseBdev1", 
00:09:39.323 "uuid": "98b6c55d-2b9f-5c02-b3f3-0200ad034c23", 00:09:39.323 "is_configured": true, 00:09:39.323 "data_offset": 2048, 00:09:39.323 "data_size": 63488 00:09:39.323 }, 00:09:39.323 { 00:09:39.323 "name": "BaseBdev2", 00:09:39.323 "uuid": "11ff708b-994a-50d8-9582-26314410b8ae", 00:09:39.323 "is_configured": true, 00:09:39.323 "data_offset": 2048, 00:09:39.323 "data_size": 63488 00:09:39.323 }, 00:09:39.323 { 00:09:39.323 "name": "BaseBdev3", 00:09:39.323 "uuid": "cd5872a8-8072-50a8-a114-3dc24261b34f", 00:09:39.323 "is_configured": true, 00:09:39.323 "data_offset": 2048, 00:09:39.323 "data_size": 63488 00:09:39.323 } 00:09:39.323 ] 00:09:39.323 }' 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.323 03:12:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.893 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:39.893 03:12:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:39.893 [2024-10-09 03:12:23.054812] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:40.832 03:12:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.832 03:12:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:40.832 03:12:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.832 "name": "raid_bdev1", 00:09:40.832 "uuid": "9a3157a5-8fc8-4b58-aea1-7412e325d04d", 00:09:40.832 "strip_size_kb": 64, 00:09:40.832 "state": "online", 00:09:40.832 
"raid_level": "raid0", 00:09:40.832 "superblock": true, 00:09:40.832 "num_base_bdevs": 3, 00:09:40.832 "num_base_bdevs_discovered": 3, 00:09:40.832 "num_base_bdevs_operational": 3, 00:09:40.832 "base_bdevs_list": [ 00:09:40.832 { 00:09:40.832 "name": "BaseBdev1", 00:09:40.832 "uuid": "98b6c55d-2b9f-5c02-b3f3-0200ad034c23", 00:09:40.832 "is_configured": true, 00:09:40.832 "data_offset": 2048, 00:09:40.832 "data_size": 63488 00:09:40.832 }, 00:09:40.832 { 00:09:40.832 "name": "BaseBdev2", 00:09:40.832 "uuid": "11ff708b-994a-50d8-9582-26314410b8ae", 00:09:40.832 "is_configured": true, 00:09:40.832 "data_offset": 2048, 00:09:40.832 "data_size": 63488 00:09:40.832 }, 00:09:40.832 { 00:09:40.832 "name": "BaseBdev3", 00:09:40.832 "uuid": "cd5872a8-8072-50a8-a114-3dc24261b34f", 00:09:40.832 "is_configured": true, 00:09:40.832 "data_offset": 2048, 00:09:40.832 "data_size": 63488 00:09:40.832 } 00:09:40.832 ] 00:09:40.832 }' 00:09:40.832 03:12:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.832 03:12:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.401 [2024-10-09 03:12:24.411406] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:41.401 [2024-10-09 03:12:24.411450] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.401 [2024-10-09 03:12:24.414209] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.401 [2024-10-09 03:12:24.414288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.401 [2024-10-09 03:12:24.414347] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.401 [2024-10-09 03:12:24.414386] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:41.401 { 00:09:41.401 "results": [ 00:09:41.401 { 00:09:41.401 "job": "raid_bdev1", 00:09:41.401 "core_mask": "0x1", 00:09:41.401 "workload": "randrw", 00:09:41.401 "percentage": 50, 00:09:41.401 "status": "finished", 00:09:41.401 "queue_depth": 1, 00:09:41.401 "io_size": 131072, 00:09:41.401 "runtime": 1.357108, 00:09:41.401 "iops": 14137.41573994111, 00:09:41.401 "mibps": 1767.1769674926388, 00:09:41.401 "io_failed": 1, 00:09:41.401 "io_timeout": 0, 00:09:41.401 "avg_latency_us": 99.5693148312984, 00:09:41.401 "min_latency_us": 26.047161572052403, 00:09:41.401 "max_latency_us": 1380.8349344978167 00:09:41.401 } 00:09:41.401 ], 00:09:41.401 "core_count": 1 00:09:41.401 } 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65577 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 65577 ']' 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 65577 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65577 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 65577' 00:09:41.401 killing process with pid 65577 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 65577 00:09:41.401 [2024-10-09 03:12:24.460193] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:41.401 03:12:24 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 65577 00:09:41.661 [2024-10-09 03:12:24.714020] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:43.041 03:12:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:43.041 03:12:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.UykEwm8xkQ 00:09:43.041 03:12:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:43.041 03:12:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:43.041 03:12:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:43.041 03:12:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:43.041 03:12:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:43.041 03:12:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:43.041 00:09:43.041 real 0m4.803s 00:09:43.041 user 0m5.481s 00:09:43.041 sys 0m0.682s 00:09:43.041 03:12:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:43.041 03:12:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.041 ************************************ 00:09:43.041 END TEST raid_write_error_test 00:09:43.041 ************************************ 00:09:43.041 03:12:26 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:43.041 03:12:26 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:09:43.041 03:12:26 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:43.041 03:12:26 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:43.041 03:12:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:43.041 ************************************ 00:09:43.041 START TEST raid_state_function_test 00:09:43.041 ************************************ 00:09:43.041 03:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 false 00:09:43.041 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:43.041 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:43.041 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:43.041 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:43.041 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:43.041 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.041 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:43.041 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:43.042 03:12:26 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65726 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65726' 00:09:43.042 Process raid pid: 65726 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65726 00:09:43.042 03:12:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 65726 ']' 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.042 03:12:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:43.042 [2024-10-09 03:12:26.336906] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:09:43.042 [2024-10-09 03:12:26.337110] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.302 [2024-10-09 03:12:26.487366] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.561 [2024-10-09 03:12:26.744182] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.821 [2024-10-09 03:12:26.988191] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:43.821 [2024-10-09 03:12:26.988345] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.080 [2024-10-09 03:12:27.178751] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.080 [2024-10-09 03:12:27.178851] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.080 [2024-10-09 03:12:27.178868] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.080 [2024-10-09 03:12:27.178880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:44.080 [2024-10-09 03:12:27.178887] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:44.080 [2024-10-09 03:12:27.178896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.080 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.080 "name": "Existed_Raid", 00:09:44.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.080 "strip_size_kb": 64, 00:09:44.080 "state": "configuring", 00:09:44.081 "raid_level": "concat", 00:09:44.081 "superblock": false, 00:09:44.081 "num_base_bdevs": 3, 00:09:44.081 "num_base_bdevs_discovered": 0, 00:09:44.081 "num_base_bdevs_operational": 3, 00:09:44.081 "base_bdevs_list": [ 00:09:44.081 { 00:09:44.081 "name": "BaseBdev1", 00:09:44.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.081 "is_configured": false, 00:09:44.081 "data_offset": 0, 00:09:44.081 "data_size": 0 00:09:44.081 }, 00:09:44.081 { 00:09:44.081 "name": "BaseBdev2", 00:09:44.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.081 "is_configured": false, 00:09:44.081 "data_offset": 0, 00:09:44.081 "data_size": 0 00:09:44.081 }, 00:09:44.081 { 00:09:44.081 "name": "BaseBdev3", 00:09:44.081 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:44.081 "is_configured": false, 00:09:44.081 "data_offset": 0, 00:09:44.081 "data_size": 0 00:09:44.081 } 00:09:44.081 ] 00:09:44.081 }' 00:09:44.081 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.081 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.342 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:44.342 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.342 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.342 [2024-10-09 03:12:27.625820] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:44.342 [2024-10-09 03:12:27.625941] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:44.342 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.342 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:44.342 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.342 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.342 [2024-10-09 03:12:27.637805] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:44.342 [2024-10-09 03:12:27.637913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:44.342 [2024-10-09 03:12:27.637944] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:44.342 [2024-10-09 03:12:27.637967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:09:44.342 [2024-10-09 03:12:27.637986] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:44.342 [2024-10-09 03:12:27.638007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:44.342 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.342 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:44.342 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.342 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.602 [2024-10-09 03:12:27.701057] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:44.602 BaseBdev1 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.602 [ 00:09:44.602 { 00:09:44.602 "name": "BaseBdev1", 00:09:44.602 "aliases": [ 00:09:44.602 "038757fa-597a-4079-b158-9ba8e4bf0335" 00:09:44.602 ], 00:09:44.602 "product_name": "Malloc disk", 00:09:44.602 "block_size": 512, 00:09:44.602 "num_blocks": 65536, 00:09:44.602 "uuid": "038757fa-597a-4079-b158-9ba8e4bf0335", 00:09:44.602 "assigned_rate_limits": { 00:09:44.602 "rw_ios_per_sec": 0, 00:09:44.602 "rw_mbytes_per_sec": 0, 00:09:44.602 "r_mbytes_per_sec": 0, 00:09:44.602 "w_mbytes_per_sec": 0 00:09:44.602 }, 00:09:44.602 "claimed": true, 00:09:44.602 "claim_type": "exclusive_write", 00:09:44.602 "zoned": false, 00:09:44.602 "supported_io_types": { 00:09:44.602 "read": true, 00:09:44.602 "write": true, 00:09:44.602 "unmap": true, 00:09:44.602 "flush": true, 00:09:44.602 "reset": true, 00:09:44.602 "nvme_admin": false, 00:09:44.602 "nvme_io": false, 00:09:44.602 "nvme_io_md": false, 00:09:44.602 "write_zeroes": true, 00:09:44.602 "zcopy": true, 00:09:44.602 "get_zone_info": false, 00:09:44.602 "zone_management": false, 00:09:44.602 "zone_append": false, 00:09:44.602 "compare": false, 00:09:44.602 "compare_and_write": false, 00:09:44.602 "abort": true, 00:09:44.602 "seek_hole": false, 00:09:44.602 "seek_data": false, 00:09:44.602 "copy": true, 00:09:44.602 "nvme_iov_md": false 00:09:44.602 }, 00:09:44.602 "memory_domains": [ 00:09:44.602 { 00:09:44.602 "dma_device_id": "system", 00:09:44.602 "dma_device_type": 1 00:09:44.602 }, 00:09:44.602 { 00:09:44.602 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:44.602 "dma_device_type": 2 00:09:44.602 } 00:09:44.602 ], 00:09:44.602 "driver_specific": {} 00:09:44.602 } 00:09:44.602 ] 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.602 03:12:27 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.602 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:44.602 "name": "Existed_Raid", 00:09:44.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.602 "strip_size_kb": 64, 00:09:44.602 "state": "configuring", 00:09:44.602 "raid_level": "concat", 00:09:44.602 "superblock": false, 00:09:44.602 "num_base_bdevs": 3, 00:09:44.602 "num_base_bdevs_discovered": 1, 00:09:44.602 "num_base_bdevs_operational": 3, 00:09:44.602 "base_bdevs_list": [ 00:09:44.602 { 00:09:44.602 "name": "BaseBdev1", 00:09:44.602 "uuid": "038757fa-597a-4079-b158-9ba8e4bf0335", 00:09:44.602 "is_configured": true, 00:09:44.602 "data_offset": 0, 00:09:44.602 "data_size": 65536 00:09:44.602 }, 00:09:44.602 { 00:09:44.602 "name": "BaseBdev2", 00:09:44.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.603 "is_configured": false, 00:09:44.603 "data_offset": 0, 00:09:44.603 "data_size": 0 00:09:44.603 }, 00:09:44.603 { 00:09:44.603 "name": "BaseBdev3", 00:09:44.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.603 "is_configured": false, 00:09:44.603 "data_offset": 0, 00:09:44.603 "data_size": 0 00:09:44.603 } 00:09:44.603 ] 00:09:44.603 }' 00:09:44.603 03:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:44.603 03:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.171 [2024-10-09 03:12:28.204197] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:45.171 [2024-10-09 03:12:28.204237] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.171 [2024-10-09 03:12:28.216221] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.171 [2024-10-09 03:12:28.218264] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:45.171 [2024-10-09 03:12:28.218354] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:45.171 [2024-10-09 03:12:28.218368] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:45.171 [2024-10-09 03:12:28.218377] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.171 03:12:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.171 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.171 "name": "Existed_Raid", 00:09:45.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.171 "strip_size_kb": 64, 00:09:45.171 "state": "configuring", 00:09:45.171 "raid_level": "concat", 00:09:45.171 "superblock": false, 00:09:45.171 "num_base_bdevs": 3, 00:09:45.171 "num_base_bdevs_discovered": 1, 00:09:45.171 "num_base_bdevs_operational": 3, 00:09:45.171 "base_bdevs_list": [ 00:09:45.171 { 00:09:45.171 "name": "BaseBdev1", 00:09:45.171 "uuid": "038757fa-597a-4079-b158-9ba8e4bf0335", 00:09:45.172 "is_configured": true, 00:09:45.172 "data_offset": 
0, 00:09:45.172 "data_size": 65536 00:09:45.172 }, 00:09:45.172 { 00:09:45.172 "name": "BaseBdev2", 00:09:45.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.172 "is_configured": false, 00:09:45.172 "data_offset": 0, 00:09:45.172 "data_size": 0 00:09:45.172 }, 00:09:45.172 { 00:09:45.172 "name": "BaseBdev3", 00:09:45.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.172 "is_configured": false, 00:09:45.172 "data_offset": 0, 00:09:45.172 "data_size": 0 00:09:45.172 } 00:09:45.172 ] 00:09:45.172 }' 00:09:45.172 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.172 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.431 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:45.431 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.431 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.431 [2024-10-09 03:12:28.721587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.431 BaseBdev2 00:09:45.431 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.431 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:45.431 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:45.431 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:45.431 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:45.431 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:45.431 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:09:45.431 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:45.431 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.431 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.691 [ 00:09:45.691 { 00:09:45.691 "name": "BaseBdev2", 00:09:45.691 "aliases": [ 00:09:45.691 "609c2d39-87be-4329-8deb-1aca777d4f8f" 00:09:45.691 ], 00:09:45.691 "product_name": "Malloc disk", 00:09:45.691 "block_size": 512, 00:09:45.691 "num_blocks": 65536, 00:09:45.691 "uuid": "609c2d39-87be-4329-8deb-1aca777d4f8f", 00:09:45.691 "assigned_rate_limits": { 00:09:45.691 "rw_ios_per_sec": 0, 00:09:45.691 "rw_mbytes_per_sec": 0, 00:09:45.691 "r_mbytes_per_sec": 0, 00:09:45.691 "w_mbytes_per_sec": 0 00:09:45.691 }, 00:09:45.691 "claimed": true, 00:09:45.691 "claim_type": "exclusive_write", 00:09:45.691 "zoned": false, 00:09:45.691 "supported_io_types": { 00:09:45.691 "read": true, 00:09:45.691 "write": true, 00:09:45.691 "unmap": true, 00:09:45.691 "flush": true, 00:09:45.691 "reset": true, 00:09:45.691 "nvme_admin": false, 00:09:45.691 "nvme_io": false, 00:09:45.691 "nvme_io_md": false, 00:09:45.691 "write_zeroes": true, 00:09:45.691 "zcopy": true, 00:09:45.691 "get_zone_info": false, 00:09:45.691 "zone_management": false, 00:09:45.691 "zone_append": false, 00:09:45.691 "compare": false, 00:09:45.691 "compare_and_write": false, 00:09:45.691 "abort": true, 00:09:45.691 "seek_hole": 
false, 00:09:45.691 "seek_data": false, 00:09:45.691 "copy": true, 00:09:45.691 "nvme_iov_md": false 00:09:45.691 }, 00:09:45.691 "memory_domains": [ 00:09:45.691 { 00:09:45.691 "dma_device_id": "system", 00:09:45.691 "dma_device_type": 1 00:09:45.691 }, 00:09:45.691 { 00:09:45.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:45.691 "dma_device_type": 2 00:09:45.691 } 00:09:45.691 ], 00:09:45.691 "driver_specific": {} 00:09:45.691 } 00:09:45.691 ] 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:45.691 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.691 "name": "Existed_Raid", 00:09:45.691 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.691 "strip_size_kb": 64, 00:09:45.691 "state": "configuring", 00:09:45.691 "raid_level": "concat", 00:09:45.691 "superblock": false, 00:09:45.691 "num_base_bdevs": 3, 00:09:45.691 "num_base_bdevs_discovered": 2, 00:09:45.691 "num_base_bdevs_operational": 3, 00:09:45.691 "base_bdevs_list": [ 00:09:45.691 { 00:09:45.691 "name": "BaseBdev1", 00:09:45.691 "uuid": "038757fa-597a-4079-b158-9ba8e4bf0335", 00:09:45.691 "is_configured": true, 00:09:45.691 "data_offset": 0, 00:09:45.691 "data_size": 65536 00:09:45.691 }, 00:09:45.691 { 00:09:45.691 "name": "BaseBdev2", 00:09:45.692 "uuid": "609c2d39-87be-4329-8deb-1aca777d4f8f", 00:09:45.692 "is_configured": true, 00:09:45.692 "data_offset": 0, 00:09:45.692 "data_size": 65536 00:09:45.692 }, 00:09:45.692 { 00:09:45.692 "name": "BaseBdev3", 00:09:45.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:45.692 "is_configured": false, 00:09:45.692 "data_offset": 0, 00:09:45.692 "data_size": 0 00:09:45.692 } 00:09:45.692 ] 00:09:45.692 }' 00:09:45.692 03:12:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.692 03:12:28 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:45.951 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:45.951 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:45.951 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.210 [2024-10-09 03:12:29.264006] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:46.210 [2024-10-09 03:12:29.264136] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:46.210 [2024-10-09 03:12:29.264156] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:46.210 [2024-10-09 03:12:29.264468] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:46.210 [2024-10-09 03:12:29.264667] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:46.210 [2024-10-09 03:12:29.264677] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:46.210 [2024-10-09 03:12:29.264982] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:46.210 BaseBdev3 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:46.210 03:12:29 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.210 [ 00:09:46.210 { 00:09:46.210 "name": "BaseBdev3", 00:09:46.210 "aliases": [ 00:09:46.210 "603af2b6-04a6-4671-aed0-b117bd2c242b" 00:09:46.210 ], 00:09:46.210 "product_name": "Malloc disk", 00:09:46.210 "block_size": 512, 00:09:46.210 "num_blocks": 65536, 00:09:46.210 "uuid": "603af2b6-04a6-4671-aed0-b117bd2c242b", 00:09:46.210 "assigned_rate_limits": { 00:09:46.210 "rw_ios_per_sec": 0, 00:09:46.210 "rw_mbytes_per_sec": 0, 00:09:46.210 "r_mbytes_per_sec": 0, 00:09:46.210 "w_mbytes_per_sec": 0 00:09:46.210 }, 00:09:46.210 "claimed": true, 00:09:46.210 "claim_type": "exclusive_write", 00:09:46.210 "zoned": false, 00:09:46.210 "supported_io_types": { 00:09:46.210 "read": true, 00:09:46.210 "write": true, 00:09:46.210 "unmap": true, 00:09:46.210 "flush": true, 00:09:46.210 "reset": true, 00:09:46.210 "nvme_admin": false, 00:09:46.210 "nvme_io": false, 00:09:46.210 "nvme_io_md": false, 00:09:46.210 "write_zeroes": true, 00:09:46.210 "zcopy": true, 00:09:46.210 "get_zone_info": false, 00:09:46.210 "zone_management": false, 00:09:46.210 "zone_append": false, 00:09:46.210 "compare": false, 
00:09:46.210 "compare_and_write": false, 00:09:46.210 "abort": true, 00:09:46.210 "seek_hole": false, 00:09:46.210 "seek_data": false, 00:09:46.210 "copy": true, 00:09:46.210 "nvme_iov_md": false 00:09:46.210 }, 00:09:46.210 "memory_domains": [ 00:09:46.210 { 00:09:46.210 "dma_device_id": "system", 00:09:46.210 "dma_device_type": 1 00:09:46.210 }, 00:09:46.210 { 00:09:46.210 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.210 "dma_device_type": 2 00:09:46.210 } 00:09:46.210 ], 00:09:46.210 "driver_specific": {} 00:09:46.210 } 00:09:46.210 ] 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.210 "name": "Existed_Raid", 00:09:46.210 "uuid": "e730db9b-0f98-4282-b570-20b82ec78bd0", 00:09:46.210 "strip_size_kb": 64, 00:09:46.210 "state": "online", 00:09:46.210 "raid_level": "concat", 00:09:46.210 "superblock": false, 00:09:46.210 "num_base_bdevs": 3, 00:09:46.210 "num_base_bdevs_discovered": 3, 00:09:46.210 "num_base_bdevs_operational": 3, 00:09:46.210 "base_bdevs_list": [ 00:09:46.210 { 00:09:46.210 "name": "BaseBdev1", 00:09:46.210 "uuid": "038757fa-597a-4079-b158-9ba8e4bf0335", 00:09:46.210 "is_configured": true, 00:09:46.210 "data_offset": 0, 00:09:46.210 "data_size": 65536 00:09:46.210 }, 00:09:46.210 { 00:09:46.210 "name": "BaseBdev2", 00:09:46.210 "uuid": "609c2d39-87be-4329-8deb-1aca777d4f8f", 00:09:46.210 "is_configured": true, 00:09:46.210 "data_offset": 0, 00:09:46.210 "data_size": 65536 00:09:46.210 }, 00:09:46.210 { 00:09:46.210 "name": "BaseBdev3", 00:09:46.210 "uuid": "603af2b6-04a6-4671-aed0-b117bd2c242b", 00:09:46.210 "is_configured": true, 00:09:46.210 "data_offset": 0, 00:09:46.210 "data_size": 65536 00:09:46.210 } 00:09:46.210 ] 00:09:46.210 }' 00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:46.210 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.469 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:46.469 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:46.469 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:46.469 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:46.469 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:46.469 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:46.469 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:46.469 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:46.469 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.469 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.469 [2024-10-09 03:12:29.743580] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:46.469 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.728 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:46.728 "name": "Existed_Raid", 00:09:46.728 "aliases": [ 00:09:46.728 "e730db9b-0f98-4282-b570-20b82ec78bd0" 00:09:46.728 ], 00:09:46.728 "product_name": "Raid Volume", 00:09:46.728 "block_size": 512, 00:09:46.728 "num_blocks": 196608, 00:09:46.728 "uuid": "e730db9b-0f98-4282-b570-20b82ec78bd0", 00:09:46.728 "assigned_rate_limits": { 00:09:46.728 "rw_ios_per_sec": 0, 00:09:46.728 "rw_mbytes_per_sec": 0, 00:09:46.728 "r_mbytes_per_sec": 
0, 00:09:46.728 "w_mbytes_per_sec": 0 00:09:46.728 }, 00:09:46.728 "claimed": false, 00:09:46.728 "zoned": false, 00:09:46.728 "supported_io_types": { 00:09:46.728 "read": true, 00:09:46.728 "write": true, 00:09:46.728 "unmap": true, 00:09:46.728 "flush": true, 00:09:46.728 "reset": true, 00:09:46.728 "nvme_admin": false, 00:09:46.728 "nvme_io": false, 00:09:46.728 "nvme_io_md": false, 00:09:46.728 "write_zeroes": true, 00:09:46.728 "zcopy": false, 00:09:46.728 "get_zone_info": false, 00:09:46.728 "zone_management": false, 00:09:46.728 "zone_append": false, 00:09:46.728 "compare": false, 00:09:46.728 "compare_and_write": false, 00:09:46.728 "abort": false, 00:09:46.728 "seek_hole": false, 00:09:46.728 "seek_data": false, 00:09:46.728 "copy": false, 00:09:46.728 "nvme_iov_md": false 00:09:46.728 }, 00:09:46.728 "memory_domains": [ 00:09:46.728 { 00:09:46.728 "dma_device_id": "system", 00:09:46.728 "dma_device_type": 1 00:09:46.728 }, 00:09:46.728 { 00:09:46.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.728 "dma_device_type": 2 00:09:46.728 }, 00:09:46.728 { 00:09:46.728 "dma_device_id": "system", 00:09:46.728 "dma_device_type": 1 00:09:46.728 }, 00:09:46.728 { 00:09:46.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.728 "dma_device_type": 2 00:09:46.728 }, 00:09:46.728 { 00:09:46.728 "dma_device_id": "system", 00:09:46.729 "dma_device_type": 1 00:09:46.729 }, 00:09:46.729 { 00:09:46.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.729 "dma_device_type": 2 00:09:46.729 } 00:09:46.729 ], 00:09:46.729 "driver_specific": { 00:09:46.729 "raid": { 00:09:46.729 "uuid": "e730db9b-0f98-4282-b570-20b82ec78bd0", 00:09:46.729 "strip_size_kb": 64, 00:09:46.729 "state": "online", 00:09:46.729 "raid_level": "concat", 00:09:46.729 "superblock": false, 00:09:46.729 "num_base_bdevs": 3, 00:09:46.729 "num_base_bdevs_discovered": 3, 00:09:46.729 "num_base_bdevs_operational": 3, 00:09:46.729 "base_bdevs_list": [ 00:09:46.729 { 00:09:46.729 "name": "BaseBdev1", 
00:09:46.729 "uuid": "038757fa-597a-4079-b158-9ba8e4bf0335", 00:09:46.729 "is_configured": true, 00:09:46.729 "data_offset": 0, 00:09:46.729 "data_size": 65536 00:09:46.729 }, 00:09:46.729 { 00:09:46.729 "name": "BaseBdev2", 00:09:46.729 "uuid": "609c2d39-87be-4329-8deb-1aca777d4f8f", 00:09:46.729 "is_configured": true, 00:09:46.729 "data_offset": 0, 00:09:46.729 "data_size": 65536 00:09:46.729 }, 00:09:46.729 { 00:09:46.729 "name": "BaseBdev3", 00:09:46.729 "uuid": "603af2b6-04a6-4671-aed0-b117bd2c242b", 00:09:46.729 "is_configured": true, 00:09:46.729 "data_offset": 0, 00:09:46.729 "data_size": 65536 00:09:46.729 } 00:09:46.729 ] 00:09:46.729 } 00:09:46.729 } 00:09:46.729 }' 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:46.729 BaseBdev2 00:09:46.729 BaseBdev3' 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.729 03:12:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.729 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:09:46.729 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:46.729 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:46.729 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.729 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.729 [2024-10-09 03:12:30.010985] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:46.729 [2024-10-09 03:12:30.011018] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:46.729 [2024-10-09 03:12:30.011085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.988 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.989 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.989 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.989 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:46.989 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.989 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.989 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.989 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.989 "name": "Existed_Raid", 00:09:46.989 "uuid": "e730db9b-0f98-4282-b570-20b82ec78bd0", 00:09:46.989 "strip_size_kb": 64, 00:09:46.989 "state": "offline", 00:09:46.989 "raid_level": "concat", 00:09:46.989 "superblock": false, 00:09:46.989 "num_base_bdevs": 3, 00:09:46.989 "num_base_bdevs_discovered": 2, 00:09:46.989 "num_base_bdevs_operational": 2, 00:09:46.989 "base_bdevs_list": [ 00:09:46.989 { 00:09:46.989 "name": null, 00:09:46.989 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:46.989 "is_configured": false, 00:09:46.989 "data_offset": 0, 00:09:46.989 "data_size": 65536 00:09:46.989 }, 00:09:46.989 { 00:09:46.989 "name": "BaseBdev2", 00:09:46.989 "uuid": 
"609c2d39-87be-4329-8deb-1aca777d4f8f", 00:09:46.989 "is_configured": true, 00:09:46.989 "data_offset": 0, 00:09:46.989 "data_size": 65536 00:09:46.989 }, 00:09:46.989 { 00:09:46.989 "name": "BaseBdev3", 00:09:46.989 "uuid": "603af2b6-04a6-4671-aed0-b117bd2c242b", 00:09:46.989 "is_configured": true, 00:09:46.989 "data_offset": 0, 00:09:46.989 "data_size": 65536 00:09:46.989 } 00:09:46.989 ] 00:09:46.989 }' 00:09:46.989 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.989 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.558 [2024-10-09 03:12:30.666022] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.558 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.558 [2024-10-09 03:12:30.826563] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:47.558 [2024-10-09 03:12:30.826709] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:47.819 03:12:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.819 03:12:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.819 BaseBdev2 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.819 
03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.819 [ 00:09:47.819 { 00:09:47.819 "name": "BaseBdev2", 00:09:47.819 "aliases": [ 00:09:47.819 "c7b6d365-337d-42ac-921a-39390927b750" 00:09:47.819 ], 00:09:47.819 "product_name": "Malloc disk", 00:09:47.819 "block_size": 512, 00:09:47.819 "num_blocks": 65536, 00:09:47.819 "uuid": "c7b6d365-337d-42ac-921a-39390927b750", 00:09:47.819 "assigned_rate_limits": { 00:09:47.819 "rw_ios_per_sec": 0, 00:09:47.819 "rw_mbytes_per_sec": 0, 00:09:47.819 "r_mbytes_per_sec": 0, 00:09:47.819 "w_mbytes_per_sec": 0 00:09:47.819 }, 00:09:47.819 "claimed": false, 00:09:47.819 "zoned": false, 00:09:47.819 "supported_io_types": { 00:09:47.819 "read": true, 00:09:47.819 "write": true, 00:09:47.819 "unmap": true, 00:09:47.819 "flush": true, 00:09:47.819 "reset": true, 00:09:47.819 "nvme_admin": false, 00:09:47.819 "nvme_io": false, 00:09:47.819 "nvme_io_md": false, 00:09:47.819 "write_zeroes": true, 
00:09:47.819 "zcopy": true, 00:09:47.819 "get_zone_info": false, 00:09:47.819 "zone_management": false, 00:09:47.819 "zone_append": false, 00:09:47.819 "compare": false, 00:09:47.819 "compare_and_write": false, 00:09:47.819 "abort": true, 00:09:47.819 "seek_hole": false, 00:09:47.819 "seek_data": false, 00:09:47.819 "copy": true, 00:09:47.819 "nvme_iov_md": false 00:09:47.819 }, 00:09:47.819 "memory_domains": [ 00:09:47.819 { 00:09:47.819 "dma_device_id": "system", 00:09:47.819 "dma_device_type": 1 00:09:47.819 }, 00:09:47.819 { 00:09:47.819 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.819 "dma_device_type": 2 00:09:47.819 } 00:09:47.819 ], 00:09:47.819 "driver_specific": {} 00:09:47.819 } 00:09:47.819 ] 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.819 BaseBdev3 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:47.819 03:12:31 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.819 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.080 [ 00:09:48.080 { 00:09:48.080 "name": "BaseBdev3", 00:09:48.080 "aliases": [ 00:09:48.080 "9a6b7202-2386-433a-8250-c859d40d364b" 00:09:48.080 ], 00:09:48.080 "product_name": "Malloc disk", 00:09:48.080 "block_size": 512, 00:09:48.080 "num_blocks": 65536, 00:09:48.080 "uuid": "9a6b7202-2386-433a-8250-c859d40d364b", 00:09:48.080 "assigned_rate_limits": { 00:09:48.080 "rw_ios_per_sec": 0, 00:09:48.080 "rw_mbytes_per_sec": 0, 00:09:48.080 "r_mbytes_per_sec": 0, 00:09:48.080 "w_mbytes_per_sec": 0 00:09:48.080 }, 00:09:48.080 "claimed": false, 00:09:48.080 "zoned": false, 00:09:48.080 "supported_io_types": { 00:09:48.080 "read": true, 00:09:48.080 "write": true, 00:09:48.080 "unmap": true, 00:09:48.080 "flush": true, 00:09:48.080 "reset": true, 00:09:48.080 "nvme_admin": false, 00:09:48.080 "nvme_io": false, 00:09:48.080 "nvme_io_md": false, 00:09:48.080 "write_zeroes": true, 
00:09:48.080 "zcopy": true, 00:09:48.080 "get_zone_info": false, 00:09:48.080 "zone_management": false, 00:09:48.080 "zone_append": false, 00:09:48.080 "compare": false, 00:09:48.080 "compare_and_write": false, 00:09:48.080 "abort": true, 00:09:48.080 "seek_hole": false, 00:09:48.080 "seek_data": false, 00:09:48.080 "copy": true, 00:09:48.080 "nvme_iov_md": false 00:09:48.080 }, 00:09:48.080 "memory_domains": [ 00:09:48.080 { 00:09:48.080 "dma_device_id": "system", 00:09:48.080 "dma_device_type": 1 00:09:48.080 }, 00:09:48.080 { 00:09:48.080 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:48.080 "dma_device_type": 2 00:09:48.080 } 00:09:48.080 ], 00:09:48.080 "driver_specific": {} 00:09:48.080 } 00:09:48.080 ] 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.080 [2024-10-09 03:12:31.158816] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:48.080 [2024-10-09 03:12:31.158951] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:48.080 [2024-10-09 03:12:31.158997] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:48.080 [2024-10-09 03:12:31.161050] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.080 "name": "Existed_Raid", 00:09:48.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.080 "strip_size_kb": 64, 00:09:48.080 "state": "configuring", 00:09:48.080 "raid_level": "concat", 00:09:48.080 "superblock": false, 00:09:48.080 "num_base_bdevs": 3, 00:09:48.080 "num_base_bdevs_discovered": 2, 00:09:48.080 "num_base_bdevs_operational": 3, 00:09:48.080 "base_bdevs_list": [ 00:09:48.080 { 00:09:48.080 "name": "BaseBdev1", 00:09:48.080 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.080 "is_configured": false, 00:09:48.080 "data_offset": 0, 00:09:48.080 "data_size": 0 00:09:48.080 }, 00:09:48.080 { 00:09:48.080 "name": "BaseBdev2", 00:09:48.080 "uuid": "c7b6d365-337d-42ac-921a-39390927b750", 00:09:48.080 "is_configured": true, 00:09:48.080 "data_offset": 0, 00:09:48.080 "data_size": 65536 00:09:48.080 }, 00:09:48.080 { 00:09:48.080 "name": "BaseBdev3", 00:09:48.080 "uuid": "9a6b7202-2386-433a-8250-c859d40d364b", 00:09:48.080 "is_configured": true, 00:09:48.080 "data_offset": 0, 00:09:48.080 "data_size": 65536 00:09:48.080 } 00:09:48.080 ] 00:09:48.080 }' 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.080 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.340 [2024-10-09 03:12:31.610060] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.340 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.600 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:48.600 "name": "Existed_Raid", 00:09:48.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.600 "strip_size_kb": 64, 00:09:48.600 "state": "configuring", 00:09:48.600 "raid_level": "concat", 00:09:48.600 "superblock": false, 
00:09:48.600 "num_base_bdevs": 3, 00:09:48.600 "num_base_bdevs_discovered": 1, 00:09:48.600 "num_base_bdevs_operational": 3, 00:09:48.600 "base_bdevs_list": [ 00:09:48.600 { 00:09:48.600 "name": "BaseBdev1", 00:09:48.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.600 "is_configured": false, 00:09:48.600 "data_offset": 0, 00:09:48.600 "data_size": 0 00:09:48.600 }, 00:09:48.600 { 00:09:48.600 "name": null, 00:09:48.600 "uuid": "c7b6d365-337d-42ac-921a-39390927b750", 00:09:48.600 "is_configured": false, 00:09:48.600 "data_offset": 0, 00:09:48.600 "data_size": 65536 00:09:48.600 }, 00:09:48.600 { 00:09:48.600 "name": "BaseBdev3", 00:09:48.600 "uuid": "9a6b7202-2386-433a-8250-c859d40d364b", 00:09:48.600 "is_configured": true, 00:09:48.600 "data_offset": 0, 00:09:48.600 "data_size": 65536 00:09:48.600 } 00:09:48.600 ] 00:09:48.600 }' 00:09:48.600 03:12:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:48.600 03:12:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.861 
03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.861 [2024-10-09 03:12:32.140297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:48.861 BaseBdev1 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:48.861 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.121 [ 00:09:49.121 { 00:09:49.121 "name": "BaseBdev1", 00:09:49.121 "aliases": [ 00:09:49.121 "e2870e5e-3c4a-48dc-ac9f-0903f119c440" 00:09:49.121 ], 00:09:49.121 "product_name": 
"Malloc disk", 00:09:49.121 "block_size": 512, 00:09:49.121 "num_blocks": 65536, 00:09:49.121 "uuid": "e2870e5e-3c4a-48dc-ac9f-0903f119c440", 00:09:49.121 "assigned_rate_limits": { 00:09:49.121 "rw_ios_per_sec": 0, 00:09:49.121 "rw_mbytes_per_sec": 0, 00:09:49.121 "r_mbytes_per_sec": 0, 00:09:49.121 "w_mbytes_per_sec": 0 00:09:49.121 }, 00:09:49.121 "claimed": true, 00:09:49.121 "claim_type": "exclusive_write", 00:09:49.121 "zoned": false, 00:09:49.121 "supported_io_types": { 00:09:49.121 "read": true, 00:09:49.121 "write": true, 00:09:49.121 "unmap": true, 00:09:49.121 "flush": true, 00:09:49.121 "reset": true, 00:09:49.121 "nvme_admin": false, 00:09:49.121 "nvme_io": false, 00:09:49.121 "nvme_io_md": false, 00:09:49.121 "write_zeroes": true, 00:09:49.121 "zcopy": true, 00:09:49.121 "get_zone_info": false, 00:09:49.121 "zone_management": false, 00:09:49.121 "zone_append": false, 00:09:49.121 "compare": false, 00:09:49.121 "compare_and_write": false, 00:09:49.121 "abort": true, 00:09:49.121 "seek_hole": false, 00:09:49.121 "seek_data": false, 00:09:49.121 "copy": true, 00:09:49.121 "nvme_iov_md": false 00:09:49.121 }, 00:09:49.121 "memory_domains": [ 00:09:49.121 { 00:09:49.121 "dma_device_id": "system", 00:09:49.121 "dma_device_type": 1 00:09:49.121 }, 00:09:49.121 { 00:09:49.121 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:49.121 "dma_device_type": 2 00:09:49.121 } 00:09:49.121 ], 00:09:49.121 "driver_specific": {} 00:09:49.121 } 00:09:49.121 ] 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.121 03:12:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.121 "name": "Existed_Raid", 00:09:49.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:49.121 "strip_size_kb": 64, 00:09:49.121 "state": "configuring", 00:09:49.121 "raid_level": "concat", 00:09:49.121 "superblock": false, 00:09:49.121 "num_base_bdevs": 3, 00:09:49.121 "num_base_bdevs_discovered": 2, 00:09:49.121 "num_base_bdevs_operational": 3, 00:09:49.121 "base_bdevs_list": [ 00:09:49.121 { 00:09:49.121 "name": "BaseBdev1", 
00:09:49.121 "uuid": "e2870e5e-3c4a-48dc-ac9f-0903f119c440", 00:09:49.121 "is_configured": true, 00:09:49.121 "data_offset": 0, 00:09:49.121 "data_size": 65536 00:09:49.121 }, 00:09:49.121 { 00:09:49.121 "name": null, 00:09:49.121 "uuid": "c7b6d365-337d-42ac-921a-39390927b750", 00:09:49.121 "is_configured": false, 00:09:49.121 "data_offset": 0, 00:09:49.121 "data_size": 65536 00:09:49.121 }, 00:09:49.121 { 00:09:49.121 "name": "BaseBdev3", 00:09:49.121 "uuid": "9a6b7202-2386-433a-8250-c859d40d364b", 00:09:49.121 "is_configured": true, 00:09:49.121 "data_offset": 0, 00:09:49.121 "data_size": 65536 00:09:49.121 } 00:09:49.121 ] 00:09:49.121 }' 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.121 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.380 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.380 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:49.380 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.380 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.380 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.380 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:09:49.380 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:09:49.381 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.381 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.381 [2024-10-09 03:12:32.679456] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:49.639 
03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.639 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.639 "name": "Existed_Raid", 00:09:49.639 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:49.639 "strip_size_kb": 64, 00:09:49.639 "state": "configuring", 00:09:49.639 "raid_level": "concat", 00:09:49.639 "superblock": false, 00:09:49.639 "num_base_bdevs": 3, 00:09:49.639 "num_base_bdevs_discovered": 1, 00:09:49.639 "num_base_bdevs_operational": 3, 00:09:49.639 "base_bdevs_list": [ 00:09:49.639 { 00:09:49.639 "name": "BaseBdev1", 00:09:49.639 "uuid": "e2870e5e-3c4a-48dc-ac9f-0903f119c440", 00:09:49.639 "is_configured": true, 00:09:49.639 "data_offset": 0, 00:09:49.639 "data_size": 65536 00:09:49.639 }, 00:09:49.639 { 00:09:49.639 "name": null, 00:09:49.639 "uuid": "c7b6d365-337d-42ac-921a-39390927b750", 00:09:49.639 "is_configured": false, 00:09:49.639 "data_offset": 0, 00:09:49.639 "data_size": 65536 00:09:49.639 }, 00:09:49.639 { 00:09:49.639 "name": null, 00:09:49.639 "uuid": "9a6b7202-2386-433a-8250-c859d40d364b", 00:09:49.639 "is_configured": false, 00:09:49.639 "data_offset": 0, 00:09:49.639 "data_size": 65536 00:09:49.639 } 00:09:49.639 ] 00:09:49.639 }' 00:09:49.640 03:12:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.640 03:12:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.899 [2024-10-09 03:12:33.178766] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:49.899 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.900 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.900 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.900 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.900 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:49.900 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.900 03:12:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:49.900 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.164 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.164 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.164 "name": "Existed_Raid", 00:09:50.164 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.164 "strip_size_kb": 64, 00:09:50.164 "state": "configuring", 00:09:50.164 "raid_level": "concat", 00:09:50.164 "superblock": false, 00:09:50.164 "num_base_bdevs": 3, 00:09:50.164 "num_base_bdevs_discovered": 2, 00:09:50.164 "num_base_bdevs_operational": 3, 00:09:50.164 "base_bdevs_list": [ 00:09:50.164 { 00:09:50.164 "name": "BaseBdev1", 00:09:50.164 "uuid": "e2870e5e-3c4a-48dc-ac9f-0903f119c440", 00:09:50.164 "is_configured": true, 00:09:50.164 "data_offset": 0, 00:09:50.164 "data_size": 65536 00:09:50.164 }, 00:09:50.164 { 00:09:50.164 "name": null, 00:09:50.164 "uuid": "c7b6d365-337d-42ac-921a-39390927b750", 00:09:50.164 "is_configured": false, 00:09:50.164 "data_offset": 0, 00:09:50.164 "data_size": 65536 00:09:50.164 }, 00:09:50.164 { 00:09:50.164 "name": "BaseBdev3", 00:09:50.164 "uuid": "9a6b7202-2386-433a-8250-c859d40d364b", 00:09:50.164 "is_configured": true, 00:09:50.164 "data_offset": 0, 00:09:50.164 "data_size": 65536 00:09:50.164 } 00:09:50.164 ] 00:09:50.164 }' 00:09:50.164 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.164 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.436 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.436 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:50.436 03:12:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.436 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.436 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.436 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:09:50.436 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:50.436 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.436 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.436 [2024-10-09 03:12:33.650020] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.695 
03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.695 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:50.695 "name": "Existed_Raid", 00:09:50.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:50.695 "strip_size_kb": 64, 00:09:50.695 "state": "configuring", 00:09:50.695 "raid_level": "concat", 00:09:50.695 "superblock": false, 00:09:50.695 "num_base_bdevs": 3, 00:09:50.695 "num_base_bdevs_discovered": 1, 00:09:50.695 "num_base_bdevs_operational": 3, 00:09:50.696 "base_bdevs_list": [ 00:09:50.696 { 00:09:50.696 "name": null, 00:09:50.696 "uuid": "e2870e5e-3c4a-48dc-ac9f-0903f119c440", 00:09:50.696 "is_configured": false, 00:09:50.696 "data_offset": 0, 00:09:50.696 "data_size": 65536 00:09:50.696 }, 00:09:50.696 { 00:09:50.696 "name": null, 00:09:50.696 "uuid": "c7b6d365-337d-42ac-921a-39390927b750", 00:09:50.696 "is_configured": false, 00:09:50.696 "data_offset": 0, 00:09:50.696 "data_size": 65536 00:09:50.696 }, 00:09:50.696 { 00:09:50.696 "name": "BaseBdev3", 00:09:50.696 "uuid": "9a6b7202-2386-433a-8250-c859d40d364b", 00:09:50.696 "is_configured": true, 00:09:50.696 "data_offset": 0, 00:09:50.696 "data_size": 65536 00:09:50.696 } 00:09:50.696 ] 00:09:50.696 }' 00:09:50.696 03:12:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:50.696 03:12:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.954 [2024-10-09 03:12:34.239018] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:50.954 03:12:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:50.954 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.213 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.213 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.213 "name": "Existed_Raid", 00:09:51.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.213 "strip_size_kb": 64, 00:09:51.213 "state": "configuring", 00:09:51.213 "raid_level": "concat", 00:09:51.213 "superblock": false, 00:09:51.213 "num_base_bdevs": 3, 00:09:51.213 "num_base_bdevs_discovered": 2, 00:09:51.213 "num_base_bdevs_operational": 3, 00:09:51.213 "base_bdevs_list": [ 00:09:51.213 { 00:09:51.213 "name": null, 00:09:51.213 "uuid": "e2870e5e-3c4a-48dc-ac9f-0903f119c440", 00:09:51.213 "is_configured": false, 00:09:51.213 "data_offset": 0, 00:09:51.213 "data_size": 65536 00:09:51.213 }, 00:09:51.213 { 00:09:51.213 "name": "BaseBdev2", 00:09:51.213 "uuid": "c7b6d365-337d-42ac-921a-39390927b750", 00:09:51.213 "is_configured": true, 00:09:51.213 "data_offset": 
0, 00:09:51.213 "data_size": 65536 00:09:51.213 }, 00:09:51.213 { 00:09:51.213 "name": "BaseBdev3", 00:09:51.213 "uuid": "9a6b7202-2386-433a-8250-c859d40d364b", 00:09:51.213 "is_configured": true, 00:09:51.213 "data_offset": 0, 00:09:51.213 "data_size": 65536 00:09:51.213 } 00:09:51.213 ] 00:09:51.213 }' 00:09:51.213 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.213 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.473 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.473 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:51.473 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.473 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.473 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.473 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:09:51.473 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:51.473 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.473 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.473 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e2870e5e-3c4a-48dc-ac9f-0903f119c440 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.732 [2024-10-09 03:12:34.834419] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:51.732 [2024-10-09 03:12:34.834565] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:51.732 [2024-10-09 03:12:34.834584] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:51.732 [2024-10-09 03:12:34.834919] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:51.732 [2024-10-09 03:12:34.835107] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:51.732 [2024-10-09 03:12:34.835117] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:09:51.732 [2024-10-09 03:12:34.835388] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:51.732 NewBaseBdev 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:51.732 
03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.732 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.732 [ 00:09:51.732 { 00:09:51.732 "name": "NewBaseBdev", 00:09:51.732 "aliases": [ 00:09:51.732 "e2870e5e-3c4a-48dc-ac9f-0903f119c440" 00:09:51.733 ], 00:09:51.733 "product_name": "Malloc disk", 00:09:51.733 "block_size": 512, 00:09:51.733 "num_blocks": 65536, 00:09:51.733 "uuid": "e2870e5e-3c4a-48dc-ac9f-0903f119c440", 00:09:51.733 "assigned_rate_limits": { 00:09:51.733 "rw_ios_per_sec": 0, 00:09:51.733 "rw_mbytes_per_sec": 0, 00:09:51.733 "r_mbytes_per_sec": 0, 00:09:51.733 "w_mbytes_per_sec": 0 00:09:51.733 }, 00:09:51.733 "claimed": true, 00:09:51.733 "claim_type": "exclusive_write", 00:09:51.733 "zoned": false, 00:09:51.733 "supported_io_types": { 00:09:51.733 "read": true, 00:09:51.733 "write": true, 00:09:51.733 "unmap": true, 00:09:51.733 "flush": true, 00:09:51.733 "reset": true, 00:09:51.733 "nvme_admin": false, 00:09:51.733 "nvme_io": false, 00:09:51.733 "nvme_io_md": false, 00:09:51.733 "write_zeroes": true, 00:09:51.733 "zcopy": true, 00:09:51.733 "get_zone_info": false, 00:09:51.733 "zone_management": false, 00:09:51.733 "zone_append": false, 00:09:51.733 "compare": false, 00:09:51.733 "compare_and_write": false, 00:09:51.733 "abort": true, 00:09:51.733 "seek_hole": false, 00:09:51.733 "seek_data": false, 00:09:51.733 "copy": true, 00:09:51.733 "nvme_iov_md": false 00:09:51.733 }, 00:09:51.733 
"memory_domains": [ 00:09:51.733 { 00:09:51.733 "dma_device_id": "system", 00:09:51.733 "dma_device_type": 1 00:09:51.733 }, 00:09:51.733 { 00:09:51.733 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:51.733 "dma_device_type": 2 00:09:51.733 } 00:09:51.733 ], 00:09:51.733 "driver_specific": {} 00:09:51.733 } 00:09:51.733 ] 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.733 "name": "Existed_Raid", 00:09:51.733 "uuid": "3ca30ba8-202e-4691-9cd6-b9918616fcef", 00:09:51.733 "strip_size_kb": 64, 00:09:51.733 "state": "online", 00:09:51.733 "raid_level": "concat", 00:09:51.733 "superblock": false, 00:09:51.733 "num_base_bdevs": 3, 00:09:51.733 "num_base_bdevs_discovered": 3, 00:09:51.733 "num_base_bdevs_operational": 3, 00:09:51.733 "base_bdevs_list": [ 00:09:51.733 { 00:09:51.733 "name": "NewBaseBdev", 00:09:51.733 "uuid": "e2870e5e-3c4a-48dc-ac9f-0903f119c440", 00:09:51.733 "is_configured": true, 00:09:51.733 "data_offset": 0, 00:09:51.733 "data_size": 65536 00:09:51.733 }, 00:09:51.733 { 00:09:51.733 "name": "BaseBdev2", 00:09:51.733 "uuid": "c7b6d365-337d-42ac-921a-39390927b750", 00:09:51.733 "is_configured": true, 00:09:51.733 "data_offset": 0, 00:09:51.733 "data_size": 65536 00:09:51.733 }, 00:09:51.733 { 00:09:51.733 "name": "BaseBdev3", 00:09:51.733 "uuid": "9a6b7202-2386-433a-8250-c859d40d364b", 00:09:51.733 "is_configured": true, 00:09:51.733 "data_offset": 0, 00:09:51.733 "data_size": 65536 00:09:51.733 } 00:09:51.733 ] 00:09:51.733 }' 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.733 03:12:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.302 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:09:52.302 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:52.302 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:09:52.302 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:52.302 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:52.302 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:52.302 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:52.302 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:52.302 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.302 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.302 [2024-10-09 03:12:35.314010] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:52.302 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.302 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:52.303 "name": "Existed_Raid", 00:09:52.303 "aliases": [ 00:09:52.303 "3ca30ba8-202e-4691-9cd6-b9918616fcef" 00:09:52.303 ], 00:09:52.303 "product_name": "Raid Volume", 00:09:52.303 "block_size": 512, 00:09:52.303 "num_blocks": 196608, 00:09:52.303 "uuid": "3ca30ba8-202e-4691-9cd6-b9918616fcef", 00:09:52.303 "assigned_rate_limits": { 00:09:52.303 "rw_ios_per_sec": 0, 00:09:52.303 "rw_mbytes_per_sec": 0, 00:09:52.303 "r_mbytes_per_sec": 0, 00:09:52.303 "w_mbytes_per_sec": 0 00:09:52.303 }, 00:09:52.303 "claimed": false, 00:09:52.303 "zoned": false, 00:09:52.303 "supported_io_types": { 00:09:52.303 "read": true, 00:09:52.303 "write": true, 00:09:52.303 "unmap": true, 00:09:52.303 "flush": true, 00:09:52.303 "reset": true, 00:09:52.303 "nvme_admin": false, 00:09:52.303 "nvme_io": false, 00:09:52.303 "nvme_io_md": false, 00:09:52.303 "write_zeroes": true, 
00:09:52.303 "zcopy": false, 00:09:52.303 "get_zone_info": false, 00:09:52.303 "zone_management": false, 00:09:52.303 "zone_append": false, 00:09:52.303 "compare": false, 00:09:52.303 "compare_and_write": false, 00:09:52.303 "abort": false, 00:09:52.303 "seek_hole": false, 00:09:52.303 "seek_data": false, 00:09:52.303 "copy": false, 00:09:52.303 "nvme_iov_md": false 00:09:52.303 }, 00:09:52.303 "memory_domains": [ 00:09:52.303 { 00:09:52.303 "dma_device_id": "system", 00:09:52.303 "dma_device_type": 1 00:09:52.303 }, 00:09:52.303 { 00:09:52.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.303 "dma_device_type": 2 00:09:52.303 }, 00:09:52.303 { 00:09:52.303 "dma_device_id": "system", 00:09:52.303 "dma_device_type": 1 00:09:52.303 }, 00:09:52.303 { 00:09:52.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.303 "dma_device_type": 2 00:09:52.303 }, 00:09:52.303 { 00:09:52.303 "dma_device_id": "system", 00:09:52.303 "dma_device_type": 1 00:09:52.303 }, 00:09:52.303 { 00:09:52.303 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:52.303 "dma_device_type": 2 00:09:52.303 } 00:09:52.303 ], 00:09:52.303 "driver_specific": { 00:09:52.303 "raid": { 00:09:52.303 "uuid": "3ca30ba8-202e-4691-9cd6-b9918616fcef", 00:09:52.303 "strip_size_kb": 64, 00:09:52.303 "state": "online", 00:09:52.303 "raid_level": "concat", 00:09:52.303 "superblock": false, 00:09:52.303 "num_base_bdevs": 3, 00:09:52.303 "num_base_bdevs_discovered": 3, 00:09:52.303 "num_base_bdevs_operational": 3, 00:09:52.303 "base_bdevs_list": [ 00:09:52.303 { 00:09:52.303 "name": "NewBaseBdev", 00:09:52.303 "uuid": "e2870e5e-3c4a-48dc-ac9f-0903f119c440", 00:09:52.303 "is_configured": true, 00:09:52.303 "data_offset": 0, 00:09:52.303 "data_size": 65536 00:09:52.303 }, 00:09:52.303 { 00:09:52.303 "name": "BaseBdev2", 00:09:52.303 "uuid": "c7b6d365-337d-42ac-921a-39390927b750", 00:09:52.303 "is_configured": true, 00:09:52.303 "data_offset": 0, 00:09:52.303 "data_size": 65536 00:09:52.303 }, 00:09:52.303 { 
00:09:52.303 "name": "BaseBdev3", 00:09:52.303 "uuid": "9a6b7202-2386-433a-8250-c859d40d364b", 00:09:52.303 "is_configured": true, 00:09:52.303 "data_offset": 0, 00:09:52.303 "data_size": 65536 00:09:52.303 } 00:09:52.303 ] 00:09:52.303 } 00:09:52.303 } 00:09:52.303 }' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:09:52.303 BaseBdev2 00:09:52.303 BaseBdev3' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:09:52.303 [2024-10-09 03:12:35.553202] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:52.303 [2024-10-09 03:12:35.553242] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.303 [2024-10-09 03:12:35.553328] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.303 [2024-10-09 03:12:35.553392] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.303 [2024-10-09 03:12:35.553406] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65726 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 65726 ']' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 65726 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 65726 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:52.303 killing process with pid 65726 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 65726' 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@969 -- # kill 65726 00:09:52.303 [2024-10-09 03:12:35.595387] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.303 03:12:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 65726 00:09:52.870 [2024-10-09 03:12:35.925423] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:54.250 00:09:54.250 real 0m11.066s 00:09:54.250 user 0m17.254s 00:09:54.250 sys 0m2.021s 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:54.250 ************************************ 00:09:54.250 END TEST raid_state_function_test 00:09:54.250 ************************************ 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.250 03:12:37 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:09:54.250 03:12:37 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:09:54.250 03:12:37 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:54.250 03:12:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:54.250 ************************************ 00:09:54.250 START TEST raid_state_function_test_sb 00:09:54.250 ************************************ 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 3 true 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66353 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66353' 00:09:54.250 Process raid pid: 66353 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66353 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 66353 ']' 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:54.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:54.250 03:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.250 [2024-10-09 03:12:37.465451] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:09:54.250 [2024-10-09 03:12:37.465642] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:54.508 [2024-10-09 03:12:37.612556] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:54.767 [2024-10-09 03:12:37.870193] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:09:55.027 [2024-10-09 03:12:38.116441] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.027 [2024-10-09 03:12:38.116597] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.027 [2024-10-09 03:12:38.316357] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.027 [2024-10-09 03:12:38.316529] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.027 [2024-10-09 
03:12:38.316564] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.027 [2024-10-09 03:12:38.316590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.027 [2024-10-09 03:12:38.316612] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.027 [2024-10-09 03:12:38.316633] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.027 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.287 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.287 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.287 "name": "Existed_Raid", 00:09:55.287 "uuid": "d7be685a-4d10-4687-bfa6-5a9e8960be26", 00:09:55.287 "strip_size_kb": 64, 00:09:55.287 "state": "configuring", 00:09:55.287 "raid_level": "concat", 00:09:55.287 "superblock": true, 00:09:55.287 "num_base_bdevs": 3, 00:09:55.287 "num_base_bdevs_discovered": 0, 00:09:55.287 "num_base_bdevs_operational": 3, 00:09:55.287 "base_bdevs_list": [ 00:09:55.287 { 00:09:55.287 "name": "BaseBdev1", 00:09:55.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.287 "is_configured": false, 00:09:55.287 "data_offset": 0, 00:09:55.287 "data_size": 0 00:09:55.287 }, 00:09:55.287 { 00:09:55.287 "name": "BaseBdev2", 00:09:55.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.287 "is_configured": false, 00:09:55.287 "data_offset": 0, 00:09:55.287 "data_size": 0 00:09:55.287 }, 00:09:55.287 { 00:09:55.287 "name": "BaseBdev3", 00:09:55.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.287 "is_configured": false, 00:09:55.287 "data_offset": 0, 00:09:55.287 "data_size": 0 00:09:55.287 } 00:09:55.287 ] 00:09:55.287 }' 00:09:55.287 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.287 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.547 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.547 03:12:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.547 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.547 [2024-10-09 03:12:38.683604] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.547 [2024-10-09 03:12:38.683658] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:55.547 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.547 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:55.547 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.547 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.547 [2024-10-09 03:12:38.691600] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:55.548 [2024-10-09 03:12:38.691688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:55.548 [2024-10-09 03:12:38.691717] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.548 [2024-10-09 03:12:38.691740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.548 [2024-10-09 03:12:38.691758] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.548 [2024-10-09 03:12:38.691779] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:55.548 
03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.548 [2024-10-09 03:12:38.752804] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.548 BaseBdev1 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.548 [ 00:09:55.548 { 
00:09:55.548 "name": "BaseBdev1", 00:09:55.548 "aliases": [ 00:09:55.548 "b1511897-6e5e-42b9-9272-ca11d921ccbe" 00:09:55.548 ], 00:09:55.548 "product_name": "Malloc disk", 00:09:55.548 "block_size": 512, 00:09:55.548 "num_blocks": 65536, 00:09:55.548 "uuid": "b1511897-6e5e-42b9-9272-ca11d921ccbe", 00:09:55.548 "assigned_rate_limits": { 00:09:55.548 "rw_ios_per_sec": 0, 00:09:55.548 "rw_mbytes_per_sec": 0, 00:09:55.548 "r_mbytes_per_sec": 0, 00:09:55.548 "w_mbytes_per_sec": 0 00:09:55.548 }, 00:09:55.548 "claimed": true, 00:09:55.548 "claim_type": "exclusive_write", 00:09:55.548 "zoned": false, 00:09:55.548 "supported_io_types": { 00:09:55.548 "read": true, 00:09:55.548 "write": true, 00:09:55.548 "unmap": true, 00:09:55.548 "flush": true, 00:09:55.548 "reset": true, 00:09:55.548 "nvme_admin": false, 00:09:55.548 "nvme_io": false, 00:09:55.548 "nvme_io_md": false, 00:09:55.548 "write_zeroes": true, 00:09:55.548 "zcopy": true, 00:09:55.548 "get_zone_info": false, 00:09:55.548 "zone_management": false, 00:09:55.548 "zone_append": false, 00:09:55.548 "compare": false, 00:09:55.548 "compare_and_write": false, 00:09:55.548 "abort": true, 00:09:55.548 "seek_hole": false, 00:09:55.548 "seek_data": false, 00:09:55.548 "copy": true, 00:09:55.548 "nvme_iov_md": false 00:09:55.548 }, 00:09:55.548 "memory_domains": [ 00:09:55.548 { 00:09:55.548 "dma_device_id": "system", 00:09:55.548 "dma_device_type": 1 00:09:55.548 }, 00:09:55.548 { 00:09:55.548 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:55.548 "dma_device_type": 2 00:09:55.548 } 00:09:55.548 ], 00:09:55.548 "driver_specific": {} 00:09:55.548 } 00:09:55.548 ] 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.548 "name": "Existed_Raid", 00:09:55.548 "uuid": "3674f715-5d12-4616-aa7d-44cdc0897aca", 00:09:55.548 "strip_size_kb": 64, 00:09:55.548 "state": "configuring", 00:09:55.548 "raid_level": "concat", 00:09:55.548 "superblock": true, 00:09:55.548 
"num_base_bdevs": 3, 00:09:55.548 "num_base_bdevs_discovered": 1, 00:09:55.548 "num_base_bdevs_operational": 3, 00:09:55.548 "base_bdevs_list": [ 00:09:55.548 { 00:09:55.548 "name": "BaseBdev1", 00:09:55.548 "uuid": "b1511897-6e5e-42b9-9272-ca11d921ccbe", 00:09:55.548 "is_configured": true, 00:09:55.548 "data_offset": 2048, 00:09:55.548 "data_size": 63488 00:09:55.548 }, 00:09:55.548 { 00:09:55.548 "name": "BaseBdev2", 00:09:55.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.548 "is_configured": false, 00:09:55.548 "data_offset": 0, 00:09:55.548 "data_size": 0 00:09:55.548 }, 00:09:55.548 { 00:09:55.548 "name": "BaseBdev3", 00:09:55.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.548 "is_configured": false, 00:09:55.548 "data_offset": 0, 00:09:55.548 "data_size": 0 00:09:55.548 } 00:09:55.548 ] 00:09:55.548 }' 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:55.548 03:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.120 [2024-10-09 03:12:39.228002] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:56.120 [2024-10-09 03:12:39.228107] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:56.120 
03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.120 [2024-10-09 03:12:39.240040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:56.120 [2024-10-09 03:12:39.242194] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:56.120 [2024-10-09 03:12:39.242280] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:56.120 [2024-10-09 03:12:39.242294] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:56.120 [2024-10-09 03:12:39.242304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.120 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.121 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.121 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.121 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.121 "name": "Existed_Raid", 00:09:56.121 "uuid": "e0a93f55-0a38-4901-be7e-ee5fb24e806e", 00:09:56.121 "strip_size_kb": 64, 00:09:56.121 "state": "configuring", 00:09:56.121 "raid_level": "concat", 00:09:56.121 "superblock": true, 00:09:56.121 "num_base_bdevs": 3, 00:09:56.121 "num_base_bdevs_discovered": 1, 00:09:56.121 "num_base_bdevs_operational": 3, 00:09:56.121 "base_bdevs_list": [ 00:09:56.121 { 00:09:56.121 "name": "BaseBdev1", 00:09:56.121 "uuid": "b1511897-6e5e-42b9-9272-ca11d921ccbe", 00:09:56.121 "is_configured": true, 00:09:56.121 "data_offset": 2048, 00:09:56.121 "data_size": 63488 00:09:56.121 }, 00:09:56.121 { 00:09:56.121 "name": "BaseBdev2", 00:09:56.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.121 "is_configured": false, 00:09:56.121 "data_offset": 0, 00:09:56.121 "data_size": 0 00:09:56.121 }, 00:09:56.121 { 00:09:56.121 "name": "BaseBdev3", 00:09:56.121 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:56.121 "is_configured": false, 00:09:56.121 "data_offset": 0, 00:09:56.121 "data_size": 0 00:09:56.121 } 00:09:56.121 ] 00:09:56.121 }' 00:09:56.121 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.121 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.385 [2024-10-09 03:12:39.639586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.385 BaseBdev2 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.385 [ 00:09:56.385 { 00:09:56.385 "name": "BaseBdev2", 00:09:56.385 "aliases": [ 00:09:56.385 "c0f8961f-77c6-47da-85ff-6fea43436736" 00:09:56.385 ], 00:09:56.385 "product_name": "Malloc disk", 00:09:56.385 "block_size": 512, 00:09:56.385 "num_blocks": 65536, 00:09:56.385 "uuid": "c0f8961f-77c6-47da-85ff-6fea43436736", 00:09:56.385 "assigned_rate_limits": { 00:09:56.385 "rw_ios_per_sec": 0, 00:09:56.385 "rw_mbytes_per_sec": 0, 00:09:56.385 "r_mbytes_per_sec": 0, 00:09:56.385 "w_mbytes_per_sec": 0 00:09:56.385 }, 00:09:56.385 "claimed": true, 00:09:56.385 "claim_type": "exclusive_write", 00:09:56.385 "zoned": false, 00:09:56.385 "supported_io_types": { 00:09:56.385 "read": true, 00:09:56.385 "write": true, 00:09:56.385 "unmap": true, 00:09:56.385 "flush": true, 00:09:56.385 "reset": true, 00:09:56.385 "nvme_admin": false, 00:09:56.385 "nvme_io": false, 00:09:56.385 "nvme_io_md": false, 00:09:56.385 "write_zeroes": true, 00:09:56.385 "zcopy": true, 00:09:56.385 "get_zone_info": false, 00:09:56.385 "zone_management": false, 00:09:56.385 "zone_append": false, 00:09:56.385 "compare": false, 00:09:56.385 "compare_and_write": false, 00:09:56.385 "abort": true, 00:09:56.385 "seek_hole": false, 00:09:56.385 "seek_data": false, 00:09:56.385 "copy": true, 00:09:56.385 "nvme_iov_md": false 00:09:56.385 }, 00:09:56.385 "memory_domains": [ 00:09:56.385 { 00:09:56.385 "dma_device_id": "system", 00:09:56.385 "dma_device_type": 1 00:09:56.385 }, 00:09:56.385 { 00:09:56.385 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.385 "dma_device_type": 2 00:09:56.385 } 00:09:56.385 ], 00:09:56.385 "driver_specific": {} 00:09:56.385 } 00:09:56.385 ] 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.385 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.643 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.643 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.643 "name": "Existed_Raid", 00:09:56.643 "uuid": "e0a93f55-0a38-4901-be7e-ee5fb24e806e", 00:09:56.643 "strip_size_kb": 64, 00:09:56.643 "state": "configuring", 00:09:56.643 "raid_level": "concat", 00:09:56.643 "superblock": true, 00:09:56.643 "num_base_bdevs": 3, 00:09:56.643 "num_base_bdevs_discovered": 2, 00:09:56.643 "num_base_bdevs_operational": 3, 00:09:56.643 "base_bdevs_list": [ 00:09:56.643 { 00:09:56.643 "name": "BaseBdev1", 00:09:56.643 "uuid": "b1511897-6e5e-42b9-9272-ca11d921ccbe", 00:09:56.643 "is_configured": true, 00:09:56.643 "data_offset": 2048, 00:09:56.643 "data_size": 63488 00:09:56.643 }, 00:09:56.643 { 00:09:56.643 "name": "BaseBdev2", 00:09:56.643 "uuid": "c0f8961f-77c6-47da-85ff-6fea43436736", 00:09:56.643 "is_configured": true, 00:09:56.643 "data_offset": 2048, 00:09:56.643 "data_size": 63488 00:09:56.643 }, 00:09:56.643 { 00:09:56.643 "name": "BaseBdev3", 00:09:56.643 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.643 "is_configured": false, 00:09:56.643 "data_offset": 0, 00:09:56.643 "data_size": 0 00:09:56.643 } 00:09:56.643 ] 00:09:56.643 }' 00:09:56.643 03:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.643 03:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:56.901 03:12:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.901 [2024-10-09 03:12:40.113628] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.901 [2024-10-09 03:12:40.114077] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:56.901 [2024-10-09 03:12:40.114150] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:56.901 [2024-10-09 03:12:40.114504] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:56.901 BaseBdev3 00:09:56.901 [2024-10-09 03:12:40.114729] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:56.901 [2024-10-09 03:12:40.114743] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:56.901 [2024-10-09 03:12:40.114931] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.901 [ 00:09:56.901 { 00:09:56.901 "name": "BaseBdev3", 00:09:56.901 "aliases": [ 00:09:56.901 "4eba9548-ab9d-47ec-8043-30f6ee6ab2fb" 00:09:56.901 ], 00:09:56.901 "product_name": "Malloc disk", 00:09:56.901 "block_size": 512, 00:09:56.901 "num_blocks": 65536, 00:09:56.901 "uuid": "4eba9548-ab9d-47ec-8043-30f6ee6ab2fb", 00:09:56.901 "assigned_rate_limits": { 00:09:56.901 "rw_ios_per_sec": 0, 00:09:56.901 "rw_mbytes_per_sec": 0, 00:09:56.901 "r_mbytes_per_sec": 0, 00:09:56.901 "w_mbytes_per_sec": 0 00:09:56.901 }, 00:09:56.901 "claimed": true, 00:09:56.901 "claim_type": "exclusive_write", 00:09:56.901 "zoned": false, 00:09:56.901 "supported_io_types": { 00:09:56.901 "read": true, 00:09:56.901 "write": true, 00:09:56.901 "unmap": true, 00:09:56.901 "flush": true, 00:09:56.901 "reset": true, 00:09:56.901 "nvme_admin": false, 00:09:56.901 "nvme_io": false, 00:09:56.901 "nvme_io_md": false, 00:09:56.901 "write_zeroes": true, 00:09:56.901 "zcopy": true, 00:09:56.901 "get_zone_info": false, 00:09:56.901 "zone_management": false, 00:09:56.901 "zone_append": false, 00:09:56.901 "compare": false, 00:09:56.901 "compare_and_write": false, 00:09:56.901 "abort": true, 00:09:56.901 "seek_hole": false, 00:09:56.901 "seek_data": false, 
00:09:56.901 "copy": true, 00:09:56.901 "nvme_iov_md": false 00:09:56.901 }, 00:09:56.901 "memory_domains": [ 00:09:56.901 { 00:09:56.901 "dma_device_id": "system", 00:09:56.901 "dma_device_type": 1 00:09:56.901 }, 00:09:56.901 { 00:09:56.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.901 "dma_device_type": 2 00:09:56.901 } 00:09:56.901 ], 00:09:56.901 "driver_specific": {} 00:09:56.901 } 00:09:56.901 ] 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:56.901 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.902 "name": "Existed_Raid", 00:09:56.902 "uuid": "e0a93f55-0a38-4901-be7e-ee5fb24e806e", 00:09:56.902 "strip_size_kb": 64, 00:09:56.902 "state": "online", 00:09:56.902 "raid_level": "concat", 00:09:56.902 "superblock": true, 00:09:56.902 "num_base_bdevs": 3, 00:09:56.902 "num_base_bdevs_discovered": 3, 00:09:56.902 "num_base_bdevs_operational": 3, 00:09:56.902 "base_bdevs_list": [ 00:09:56.902 { 00:09:56.902 "name": "BaseBdev1", 00:09:56.902 "uuid": "b1511897-6e5e-42b9-9272-ca11d921ccbe", 00:09:56.902 "is_configured": true, 00:09:56.902 "data_offset": 2048, 00:09:56.902 "data_size": 63488 00:09:56.902 }, 00:09:56.902 { 00:09:56.902 "name": "BaseBdev2", 00:09:56.902 "uuid": "c0f8961f-77c6-47da-85ff-6fea43436736", 00:09:56.902 "is_configured": true, 00:09:56.902 "data_offset": 2048, 00:09:56.902 "data_size": 63488 00:09:56.902 }, 00:09:56.902 { 00:09:56.902 "name": "BaseBdev3", 00:09:56.902 "uuid": "4eba9548-ab9d-47ec-8043-30f6ee6ab2fb", 00:09:56.902 "is_configured": true, 00:09:56.902 "data_offset": 2048, 00:09:56.902 "data_size": 63488 00:09:56.902 } 00:09:56.902 ] 00:09:56.902 }' 00:09:56.902 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.902 03:12:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.467 [2024-10-09 03:12:40.565282] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.467 "name": "Existed_Raid", 00:09:57.467 "aliases": [ 00:09:57.467 "e0a93f55-0a38-4901-be7e-ee5fb24e806e" 00:09:57.467 ], 00:09:57.467 "product_name": "Raid Volume", 00:09:57.467 "block_size": 512, 00:09:57.467 "num_blocks": 190464, 00:09:57.467 "uuid": "e0a93f55-0a38-4901-be7e-ee5fb24e806e", 00:09:57.467 "assigned_rate_limits": { 00:09:57.467 "rw_ios_per_sec": 0, 00:09:57.467 "rw_mbytes_per_sec": 0, 00:09:57.467 
"r_mbytes_per_sec": 0, 00:09:57.467 "w_mbytes_per_sec": 0 00:09:57.467 }, 00:09:57.467 "claimed": false, 00:09:57.467 "zoned": false, 00:09:57.467 "supported_io_types": { 00:09:57.467 "read": true, 00:09:57.467 "write": true, 00:09:57.467 "unmap": true, 00:09:57.467 "flush": true, 00:09:57.467 "reset": true, 00:09:57.467 "nvme_admin": false, 00:09:57.467 "nvme_io": false, 00:09:57.467 "nvme_io_md": false, 00:09:57.467 "write_zeroes": true, 00:09:57.467 "zcopy": false, 00:09:57.467 "get_zone_info": false, 00:09:57.467 "zone_management": false, 00:09:57.467 "zone_append": false, 00:09:57.467 "compare": false, 00:09:57.467 "compare_and_write": false, 00:09:57.467 "abort": false, 00:09:57.467 "seek_hole": false, 00:09:57.467 "seek_data": false, 00:09:57.467 "copy": false, 00:09:57.467 "nvme_iov_md": false 00:09:57.467 }, 00:09:57.467 "memory_domains": [ 00:09:57.467 { 00:09:57.467 "dma_device_id": "system", 00:09:57.467 "dma_device_type": 1 00:09:57.467 }, 00:09:57.467 { 00:09:57.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.467 "dma_device_type": 2 00:09:57.467 }, 00:09:57.467 { 00:09:57.467 "dma_device_id": "system", 00:09:57.467 "dma_device_type": 1 00:09:57.467 }, 00:09:57.467 { 00:09:57.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.467 "dma_device_type": 2 00:09:57.467 }, 00:09:57.467 { 00:09:57.467 "dma_device_id": "system", 00:09:57.467 "dma_device_type": 1 00:09:57.467 }, 00:09:57.467 { 00:09:57.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.467 "dma_device_type": 2 00:09:57.467 } 00:09:57.467 ], 00:09:57.467 "driver_specific": { 00:09:57.467 "raid": { 00:09:57.467 "uuid": "e0a93f55-0a38-4901-be7e-ee5fb24e806e", 00:09:57.467 "strip_size_kb": 64, 00:09:57.467 "state": "online", 00:09:57.467 "raid_level": "concat", 00:09:57.467 "superblock": true, 00:09:57.467 "num_base_bdevs": 3, 00:09:57.467 "num_base_bdevs_discovered": 3, 00:09:57.467 "num_base_bdevs_operational": 3, 00:09:57.467 "base_bdevs_list": [ 00:09:57.467 { 00:09:57.467 
"name": "BaseBdev1", 00:09:57.467 "uuid": "b1511897-6e5e-42b9-9272-ca11d921ccbe", 00:09:57.467 "is_configured": true, 00:09:57.467 "data_offset": 2048, 00:09:57.467 "data_size": 63488 00:09:57.467 }, 00:09:57.467 { 00:09:57.467 "name": "BaseBdev2", 00:09:57.467 "uuid": "c0f8961f-77c6-47da-85ff-6fea43436736", 00:09:57.467 "is_configured": true, 00:09:57.467 "data_offset": 2048, 00:09:57.467 "data_size": 63488 00:09:57.467 }, 00:09:57.467 { 00:09:57.467 "name": "BaseBdev3", 00:09:57.467 "uuid": "4eba9548-ab9d-47ec-8043-30f6ee6ab2fb", 00:09:57.467 "is_configured": true, 00:09:57.467 "data_offset": 2048, 00:09:57.467 "data_size": 63488 00:09:57.467 } 00:09:57.467 ] 00:09:57.467 } 00:09:57.467 } 00:09:57.467 }' 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:57.467 BaseBdev2 00:09:57.467 BaseBdev3' 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.467 03:12:40 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.467 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.725 [2024-10-09 03:12:40.836511] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:57.725 [2024-10-09 03:12:40.836615] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.725 [2024-10-09 03:12:40.836712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.725 "name": "Existed_Raid", 00:09:57.725 "uuid": "e0a93f55-0a38-4901-be7e-ee5fb24e806e", 00:09:57.725 "strip_size_kb": 64, 00:09:57.725 "state": "offline", 00:09:57.725 "raid_level": "concat", 00:09:57.725 "superblock": true, 00:09:57.725 "num_base_bdevs": 3, 00:09:57.725 "num_base_bdevs_discovered": 2, 00:09:57.725 "num_base_bdevs_operational": 2, 00:09:57.725 "base_bdevs_list": [ 00:09:57.725 { 00:09:57.725 "name": null, 00:09:57.725 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:09:57.725 "is_configured": false, 00:09:57.725 "data_offset": 0, 00:09:57.725 "data_size": 63488 00:09:57.725 }, 00:09:57.725 { 00:09:57.725 "name": "BaseBdev2", 00:09:57.725 "uuid": "c0f8961f-77c6-47da-85ff-6fea43436736", 00:09:57.725 "is_configured": true, 00:09:57.725 "data_offset": 2048, 00:09:57.725 "data_size": 63488 00:09:57.725 }, 00:09:57.725 { 00:09:57.725 "name": "BaseBdev3", 00:09:57.725 "uuid": "4eba9548-ab9d-47ec-8043-30f6ee6ab2fb", 00:09:57.725 "is_configured": true, 00:09:57.725 "data_offset": 2048, 00:09:57.725 "data_size": 63488 00:09:57.725 } 00:09:57.725 ] 00:09:57.725 }' 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.725 03:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.291 [2024-10-09 03:12:41.423501] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.291 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.291 [2024-10-09 03:12:41.592233] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:58.291 [2024-10-09 03:12:41.592316] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.550 BaseBdev2 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.550 
03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.550 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.550 [ 00:09:58.550 { 00:09:58.550 "name": "BaseBdev2", 00:09:58.550 "aliases": [ 00:09:58.550 "862730eb-c727-4c9c-9814-4b5490a92123" 00:09:58.550 ], 00:09:58.550 "product_name": "Malloc disk", 00:09:58.550 "block_size": 512, 00:09:58.550 "num_blocks": 65536, 00:09:58.550 "uuid": "862730eb-c727-4c9c-9814-4b5490a92123", 00:09:58.550 "assigned_rate_limits": { 00:09:58.550 "rw_ios_per_sec": 0, 00:09:58.550 "rw_mbytes_per_sec": 0, 00:09:58.550 "r_mbytes_per_sec": 0, 00:09:58.550 "w_mbytes_per_sec": 0 
00:09:58.550 }, 00:09:58.550 "claimed": false, 00:09:58.550 "zoned": false, 00:09:58.550 "supported_io_types": { 00:09:58.550 "read": true, 00:09:58.550 "write": true, 00:09:58.550 "unmap": true, 00:09:58.550 "flush": true, 00:09:58.550 "reset": true, 00:09:58.550 "nvme_admin": false, 00:09:58.550 "nvme_io": false, 00:09:58.550 "nvme_io_md": false, 00:09:58.550 "write_zeroes": true, 00:09:58.550 "zcopy": true, 00:09:58.550 "get_zone_info": false, 00:09:58.550 "zone_management": false, 00:09:58.550 "zone_append": false, 00:09:58.550 "compare": false, 00:09:58.550 "compare_and_write": false, 00:09:58.551 "abort": true, 00:09:58.551 "seek_hole": false, 00:09:58.551 "seek_data": false, 00:09:58.551 "copy": true, 00:09:58.551 "nvme_iov_md": false 00:09:58.551 }, 00:09:58.551 "memory_domains": [ 00:09:58.551 { 00:09:58.551 "dma_device_id": "system", 00:09:58.551 "dma_device_type": 1 00:09:58.551 }, 00:09:58.551 { 00:09:58.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.551 "dma_device_type": 2 00:09:58.551 } 00:09:58.551 ], 00:09:58.551 "driver_specific": {} 00:09:58.551 } 00:09:58.551 ] 00:09:58.551 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.551 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:58.551 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:58.551 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.551 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:58.551 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.551 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.810 BaseBdev3 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.810 [ 00:09:58.810 { 00:09:58.810 "name": "BaseBdev3", 00:09:58.810 "aliases": [ 00:09:58.810 "b9ee5f85-5674-4354-bdec-cb7b7ecf6b2e" 00:09:58.810 ], 00:09:58.810 "product_name": "Malloc disk", 00:09:58.810 "block_size": 512, 00:09:58.810 "num_blocks": 65536, 00:09:58.810 "uuid": "b9ee5f85-5674-4354-bdec-cb7b7ecf6b2e", 00:09:58.810 "assigned_rate_limits": { 00:09:58.810 "rw_ios_per_sec": 0, 00:09:58.810 "rw_mbytes_per_sec": 0, 
00:09:58.810 "r_mbytes_per_sec": 0, 00:09:58.810 "w_mbytes_per_sec": 0 00:09:58.810 }, 00:09:58.810 "claimed": false, 00:09:58.810 "zoned": false, 00:09:58.810 "supported_io_types": { 00:09:58.810 "read": true, 00:09:58.810 "write": true, 00:09:58.810 "unmap": true, 00:09:58.810 "flush": true, 00:09:58.810 "reset": true, 00:09:58.810 "nvme_admin": false, 00:09:58.810 "nvme_io": false, 00:09:58.810 "nvme_io_md": false, 00:09:58.810 "write_zeroes": true, 00:09:58.810 "zcopy": true, 00:09:58.810 "get_zone_info": false, 00:09:58.810 "zone_management": false, 00:09:58.810 "zone_append": false, 00:09:58.810 "compare": false, 00:09:58.810 "compare_and_write": false, 00:09:58.810 "abort": true, 00:09:58.810 "seek_hole": false, 00:09:58.810 "seek_data": false, 00:09:58.810 "copy": true, 00:09:58.810 "nvme_iov_md": false 00:09:58.810 }, 00:09:58.810 "memory_domains": [ 00:09:58.810 { 00:09:58.810 "dma_device_id": "system", 00:09:58.810 "dma_device_type": 1 00:09:58.810 }, 00:09:58.810 { 00:09:58.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.810 "dma_device_type": 2 00:09:58.810 } 00:09:58.810 ], 00:09:58.810 "driver_specific": {} 00:09:58.810 } 00:09:58.810 ] 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:58.810 [2024-10-09 03:12:41.934219] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.810 [2024-10-09 03:12:41.934382] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.810 [2024-10-09 03:12:41.934442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.810 [2024-10-09 03:12:41.936694] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.810 03:12:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.810 "name": "Existed_Raid", 00:09:58.810 "uuid": "3e10ed6e-f74b-4d4c-a840-d97346fe364f", 00:09:58.810 "strip_size_kb": 64, 00:09:58.810 "state": "configuring", 00:09:58.810 "raid_level": "concat", 00:09:58.810 "superblock": true, 00:09:58.810 "num_base_bdevs": 3, 00:09:58.810 "num_base_bdevs_discovered": 2, 00:09:58.810 "num_base_bdevs_operational": 3, 00:09:58.810 "base_bdevs_list": [ 00:09:58.810 { 00:09:58.810 "name": "BaseBdev1", 00:09:58.810 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.810 "is_configured": false, 00:09:58.810 "data_offset": 0, 00:09:58.810 "data_size": 0 00:09:58.810 }, 00:09:58.810 { 00:09:58.810 "name": "BaseBdev2", 00:09:58.810 "uuid": "862730eb-c727-4c9c-9814-4b5490a92123", 00:09:58.810 "is_configured": true, 00:09:58.810 "data_offset": 2048, 00:09:58.810 "data_size": 63488 00:09:58.810 }, 00:09:58.810 { 00:09:58.810 "name": "BaseBdev3", 00:09:58.810 "uuid": "b9ee5f85-5674-4354-bdec-cb7b7ecf6b2e", 00:09:58.810 "is_configured": true, 00:09:58.810 "data_offset": 2048, 00:09:58.810 "data_size": 63488 00:09:58.810 } 00:09:58.810 ] 00:09:58.810 }' 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.810 03:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.068 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:09:59.068 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.068 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.068 [2024-10-09 03:12:42.349433] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:59.068 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.068 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:59.068 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.068 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.069 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.069 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.069 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.069 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.069 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.069 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.069 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.069 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.069 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.069 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:09:59.069 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.327 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.327 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.327 "name": "Existed_Raid", 00:09:59.327 "uuid": "3e10ed6e-f74b-4d4c-a840-d97346fe364f", 00:09:59.327 "strip_size_kb": 64, 00:09:59.327 "state": "configuring", 00:09:59.327 "raid_level": "concat", 00:09:59.327 "superblock": true, 00:09:59.327 "num_base_bdevs": 3, 00:09:59.327 "num_base_bdevs_discovered": 1, 00:09:59.327 "num_base_bdevs_operational": 3, 00:09:59.327 "base_bdevs_list": [ 00:09:59.327 { 00:09:59.327 "name": "BaseBdev1", 00:09:59.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.327 "is_configured": false, 00:09:59.327 "data_offset": 0, 00:09:59.327 "data_size": 0 00:09:59.327 }, 00:09:59.327 { 00:09:59.327 "name": null, 00:09:59.327 "uuid": "862730eb-c727-4c9c-9814-4b5490a92123", 00:09:59.327 "is_configured": false, 00:09:59.327 "data_offset": 0, 00:09:59.327 "data_size": 63488 00:09:59.327 }, 00:09:59.327 { 00:09:59.327 "name": "BaseBdev3", 00:09:59.327 "uuid": "b9ee5f85-5674-4354-bdec-cb7b7ecf6b2e", 00:09:59.327 "is_configured": true, 00:09:59.327 "data_offset": 2048, 00:09:59.327 "data_size": 63488 00:09:59.327 } 00:09:59.327 ] 00:09:59.327 }' 00:09:59.327 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.327 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.585 03:12:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.585 [2024-10-09 03:12:42.851343] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.585 BaseBdev1 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.585 
03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.585 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.585 [ 00:09:59.585 { 00:09:59.585 "name": "BaseBdev1", 00:09:59.585 "aliases": [ 00:09:59.585 "c234fcda-fc50-4349-bd6e-d564a42d81f5" 00:09:59.585 ], 00:09:59.585 "product_name": "Malloc disk", 00:09:59.585 "block_size": 512, 00:09:59.585 "num_blocks": 65536, 00:09:59.585 "uuid": "c234fcda-fc50-4349-bd6e-d564a42d81f5", 00:09:59.585 "assigned_rate_limits": { 00:09:59.585 "rw_ios_per_sec": 0, 00:09:59.585 "rw_mbytes_per_sec": 0, 00:09:59.585 "r_mbytes_per_sec": 0, 00:09:59.585 "w_mbytes_per_sec": 0 00:09:59.585 }, 00:09:59.585 "claimed": true, 00:09:59.585 "claim_type": "exclusive_write", 00:09:59.585 "zoned": false, 00:09:59.585 "supported_io_types": { 00:09:59.585 "read": true, 00:09:59.585 "write": true, 00:09:59.585 "unmap": true, 00:09:59.585 "flush": true, 00:09:59.585 "reset": true, 00:09:59.585 "nvme_admin": false, 00:09:59.585 "nvme_io": false, 00:09:59.585 "nvme_io_md": false, 00:09:59.585 "write_zeroes": true, 00:09:59.585 "zcopy": true, 00:09:59.585 "get_zone_info": false, 00:09:59.585 "zone_management": false, 00:09:59.585 "zone_append": false, 00:09:59.585 "compare": false, 00:09:59.585 "compare_and_write": false, 00:09:59.585 "abort": true, 00:09:59.585 "seek_hole": false, 00:09:59.585 "seek_data": false, 00:09:59.585 "copy": true, 00:09:59.585 "nvme_iov_md": false 00:09:59.585 }, 00:09:59.585 "memory_domains": [ 00:09:59.585 { 00:09:59.585 "dma_device_id": "system", 00:09:59.585 "dma_device_type": 1 00:09:59.585 }, 00:09:59.585 { 00:09:59.585 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:09:59.585 "dma_device_type": 2 00:09:59.585 } 00:09:59.585 ], 00:09:59.585 "driver_specific": {} 00:09:59.585 } 00:09:59.585 ] 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.843 "name": "Existed_Raid", 00:09:59.843 "uuid": "3e10ed6e-f74b-4d4c-a840-d97346fe364f", 00:09:59.843 "strip_size_kb": 64, 00:09:59.843 "state": "configuring", 00:09:59.843 "raid_level": "concat", 00:09:59.843 "superblock": true, 00:09:59.843 "num_base_bdevs": 3, 00:09:59.843 "num_base_bdevs_discovered": 2, 00:09:59.843 "num_base_bdevs_operational": 3, 00:09:59.843 "base_bdevs_list": [ 00:09:59.843 { 00:09:59.843 "name": "BaseBdev1", 00:09:59.843 "uuid": "c234fcda-fc50-4349-bd6e-d564a42d81f5", 00:09:59.843 "is_configured": true, 00:09:59.843 "data_offset": 2048, 00:09:59.843 "data_size": 63488 00:09:59.843 }, 00:09:59.843 { 00:09:59.843 "name": null, 00:09:59.843 "uuid": "862730eb-c727-4c9c-9814-4b5490a92123", 00:09:59.843 "is_configured": false, 00:09:59.843 "data_offset": 0, 00:09:59.843 "data_size": 63488 00:09:59.843 }, 00:09:59.843 { 00:09:59.843 "name": "BaseBdev3", 00:09:59.843 "uuid": "b9ee5f85-5674-4354-bdec-cb7b7ecf6b2e", 00:09:59.843 "is_configured": true, 00:09:59.843 "data_offset": 2048, 00:09:59.843 "data_size": 63488 00:09:59.843 } 00:09:59.843 ] 00:09:59.843 }' 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.843 03:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- 
# jq '.[0].base_bdevs_list[0].is_configured' 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.102 [2024-10-09 03:12:43.338644] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.102 "name": "Existed_Raid", 00:10:00.102 "uuid": "3e10ed6e-f74b-4d4c-a840-d97346fe364f", 00:10:00.102 "strip_size_kb": 64, 00:10:00.102 "state": "configuring", 00:10:00.102 "raid_level": "concat", 00:10:00.102 "superblock": true, 00:10:00.102 "num_base_bdevs": 3, 00:10:00.102 "num_base_bdevs_discovered": 1, 00:10:00.102 "num_base_bdevs_operational": 3, 00:10:00.102 "base_bdevs_list": [ 00:10:00.102 { 00:10:00.102 "name": "BaseBdev1", 00:10:00.102 "uuid": "c234fcda-fc50-4349-bd6e-d564a42d81f5", 00:10:00.102 "is_configured": true, 00:10:00.102 "data_offset": 2048, 00:10:00.102 "data_size": 63488 00:10:00.102 }, 00:10:00.102 { 00:10:00.102 "name": null, 00:10:00.102 "uuid": "862730eb-c727-4c9c-9814-4b5490a92123", 00:10:00.102 "is_configured": false, 00:10:00.102 "data_offset": 0, 00:10:00.102 "data_size": 63488 00:10:00.102 }, 00:10:00.102 { 00:10:00.102 "name": null, 00:10:00.102 "uuid": "b9ee5f85-5674-4354-bdec-cb7b7ecf6b2e", 00:10:00.102 "is_configured": false, 00:10:00.102 "data_offset": 0, 00:10:00.102 "data_size": 63488 00:10:00.102 } 00:10:00.102 ] 00:10:00.102 }' 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.102 03:12:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.670 [2024-10-09 03:12:43.857683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.670 03:12:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.670 "name": "Existed_Raid", 00:10:00.670 "uuid": "3e10ed6e-f74b-4d4c-a840-d97346fe364f", 00:10:00.670 "strip_size_kb": 64, 00:10:00.670 "state": "configuring", 00:10:00.670 "raid_level": "concat", 00:10:00.670 "superblock": true, 00:10:00.670 "num_base_bdevs": 3, 00:10:00.670 "num_base_bdevs_discovered": 2, 00:10:00.670 "num_base_bdevs_operational": 3, 00:10:00.670 "base_bdevs_list": [ 00:10:00.670 { 00:10:00.670 "name": "BaseBdev1", 00:10:00.670 "uuid": "c234fcda-fc50-4349-bd6e-d564a42d81f5", 00:10:00.670 "is_configured": true, 00:10:00.670 "data_offset": 2048, 00:10:00.670 "data_size": 63488 00:10:00.670 }, 00:10:00.670 { 00:10:00.670 "name": null, 00:10:00.670 "uuid": "862730eb-c727-4c9c-9814-4b5490a92123", 00:10:00.670 "is_configured": 
false, 00:10:00.670 "data_offset": 0, 00:10:00.670 "data_size": 63488 00:10:00.670 }, 00:10:00.670 { 00:10:00.670 "name": "BaseBdev3", 00:10:00.670 "uuid": "b9ee5f85-5674-4354-bdec-cb7b7ecf6b2e", 00:10:00.670 "is_configured": true, 00:10:00.670 "data_offset": 2048, 00:10:00.670 "data_size": 63488 00:10:00.670 } 00:10:00.670 ] 00:10:00.670 }' 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.670 03:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.236 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.236 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:01.236 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.236 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.236 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.236 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:01.236 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.236 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.236 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.236 [2024-10-09 03:12:44.356925] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.236 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.236 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:01.236 03:12:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.236 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.236 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.237 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.237 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.237 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.237 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.237 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.237 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.237 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.237 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.237 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.237 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.237 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.237 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.237 "name": "Existed_Raid", 00:10:01.237 "uuid": "3e10ed6e-f74b-4d4c-a840-d97346fe364f", 00:10:01.237 "strip_size_kb": 64, 00:10:01.237 "state": "configuring", 00:10:01.237 "raid_level": "concat", 00:10:01.237 "superblock": true, 00:10:01.237 "num_base_bdevs": 3, 00:10:01.237 
"num_base_bdevs_discovered": 1, 00:10:01.237 "num_base_bdevs_operational": 3, 00:10:01.237 "base_bdevs_list": [ 00:10:01.237 { 00:10:01.237 "name": null, 00:10:01.237 "uuid": "c234fcda-fc50-4349-bd6e-d564a42d81f5", 00:10:01.237 "is_configured": false, 00:10:01.237 "data_offset": 0, 00:10:01.237 "data_size": 63488 00:10:01.237 }, 00:10:01.237 { 00:10:01.237 "name": null, 00:10:01.237 "uuid": "862730eb-c727-4c9c-9814-4b5490a92123", 00:10:01.237 "is_configured": false, 00:10:01.237 "data_offset": 0, 00:10:01.237 "data_size": 63488 00:10:01.237 }, 00:10:01.237 { 00:10:01.237 "name": "BaseBdev3", 00:10:01.237 "uuid": "b9ee5f85-5674-4354-bdec-cb7b7ecf6b2e", 00:10:01.237 "is_configured": true, 00:10:01.237 "data_offset": 2048, 00:10:01.237 "data_size": 63488 00:10:01.237 } 00:10:01.237 ] 00:10:01.237 }' 00:10:01.237 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.237 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.804 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.804 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:01.804 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.804 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.804 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.804 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:01.804 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:01.804 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.804 03:12:44 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.805 [2024-10-09 03:12:44.911535] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.805 
03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.805 "name": "Existed_Raid", 00:10:01.805 "uuid": "3e10ed6e-f74b-4d4c-a840-d97346fe364f", 00:10:01.805 "strip_size_kb": 64, 00:10:01.805 "state": "configuring", 00:10:01.805 "raid_level": "concat", 00:10:01.805 "superblock": true, 00:10:01.805 "num_base_bdevs": 3, 00:10:01.805 "num_base_bdevs_discovered": 2, 00:10:01.805 "num_base_bdevs_operational": 3, 00:10:01.805 "base_bdevs_list": [ 00:10:01.805 { 00:10:01.805 "name": null, 00:10:01.805 "uuid": "c234fcda-fc50-4349-bd6e-d564a42d81f5", 00:10:01.805 "is_configured": false, 00:10:01.805 "data_offset": 0, 00:10:01.805 "data_size": 63488 00:10:01.805 }, 00:10:01.805 { 00:10:01.805 "name": "BaseBdev2", 00:10:01.805 "uuid": "862730eb-c727-4c9c-9814-4b5490a92123", 00:10:01.805 "is_configured": true, 00:10:01.805 "data_offset": 2048, 00:10:01.805 "data_size": 63488 00:10:01.805 }, 00:10:01.805 { 00:10:01.805 "name": "BaseBdev3", 00:10:01.805 "uuid": "b9ee5f85-5674-4354-bdec-cb7b7ecf6b2e", 00:10:01.805 "is_configured": true, 00:10:01.805 "data_offset": 2048, 00:10:01.805 "data_size": 63488 00:10:01.805 } 00:10:01.805 ] 00:10:01.805 }' 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.805 03:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.065 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.065 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:02.065 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.065 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:02.065 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.065 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:02.325 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:02.325 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.325 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.325 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.325 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.325 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u c234fcda-fc50-4349-bd6e-d564a42d81f5 00:10:02.325 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.325 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.325 [2024-10-09 03:12:45.447364] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:02.325 [2024-10-09 03:12:45.447671] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:02.325 [2024-10-09 03:12:45.447699] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:02.325 [2024-10-09 03:12:45.448000] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:02.325 [2024-10-09 03:12:45.448158] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:02.325 [2024-10-09 03:12:45.448168] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 
00:10:02.325 NewBaseBdev 00:10:02.325 [2024-10-09 03:12:45.448322] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.325 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.326 [ 00:10:02.326 { 00:10:02.326 "name": "NewBaseBdev", 00:10:02.326 "aliases": [ 00:10:02.326 "c234fcda-fc50-4349-bd6e-d564a42d81f5" 00:10:02.326 ], 00:10:02.326 "product_name": "Malloc disk", 00:10:02.326 "block_size": 512, 
00:10:02.326 "num_blocks": 65536, 00:10:02.326 "uuid": "c234fcda-fc50-4349-bd6e-d564a42d81f5", 00:10:02.326 "assigned_rate_limits": { 00:10:02.326 "rw_ios_per_sec": 0, 00:10:02.326 "rw_mbytes_per_sec": 0, 00:10:02.326 "r_mbytes_per_sec": 0, 00:10:02.326 "w_mbytes_per_sec": 0 00:10:02.326 }, 00:10:02.326 "claimed": true, 00:10:02.326 "claim_type": "exclusive_write", 00:10:02.326 "zoned": false, 00:10:02.326 "supported_io_types": { 00:10:02.326 "read": true, 00:10:02.326 "write": true, 00:10:02.326 "unmap": true, 00:10:02.326 "flush": true, 00:10:02.326 "reset": true, 00:10:02.326 "nvme_admin": false, 00:10:02.326 "nvme_io": false, 00:10:02.326 "nvme_io_md": false, 00:10:02.326 "write_zeroes": true, 00:10:02.326 "zcopy": true, 00:10:02.326 "get_zone_info": false, 00:10:02.326 "zone_management": false, 00:10:02.326 "zone_append": false, 00:10:02.326 "compare": false, 00:10:02.326 "compare_and_write": false, 00:10:02.326 "abort": true, 00:10:02.326 "seek_hole": false, 00:10:02.326 "seek_data": false, 00:10:02.326 "copy": true, 00:10:02.326 "nvme_iov_md": false 00:10:02.326 }, 00:10:02.326 "memory_domains": [ 00:10:02.326 { 00:10:02.326 "dma_device_id": "system", 00:10:02.326 "dma_device_type": 1 00:10:02.326 }, 00:10:02.326 { 00:10:02.326 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.326 "dma_device_type": 2 00:10:02.326 } 00:10:02.326 ], 00:10:02.326 "driver_specific": {} 00:10:02.326 } 00:10:02.326 ] 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.326 "name": "Existed_Raid", 00:10:02.326 "uuid": "3e10ed6e-f74b-4d4c-a840-d97346fe364f", 00:10:02.326 "strip_size_kb": 64, 00:10:02.326 "state": "online", 00:10:02.326 "raid_level": "concat", 00:10:02.326 "superblock": true, 00:10:02.326 "num_base_bdevs": 3, 00:10:02.326 "num_base_bdevs_discovered": 3, 00:10:02.326 "num_base_bdevs_operational": 3, 00:10:02.326 "base_bdevs_list": [ 00:10:02.326 { 00:10:02.326 "name": "NewBaseBdev", 00:10:02.326 "uuid": 
"c234fcda-fc50-4349-bd6e-d564a42d81f5", 00:10:02.326 "is_configured": true, 00:10:02.326 "data_offset": 2048, 00:10:02.326 "data_size": 63488 00:10:02.326 }, 00:10:02.326 { 00:10:02.326 "name": "BaseBdev2", 00:10:02.326 "uuid": "862730eb-c727-4c9c-9814-4b5490a92123", 00:10:02.326 "is_configured": true, 00:10:02.326 "data_offset": 2048, 00:10:02.326 "data_size": 63488 00:10:02.326 }, 00:10:02.326 { 00:10:02.326 "name": "BaseBdev3", 00:10:02.326 "uuid": "b9ee5f85-5674-4354-bdec-cb7b7ecf6b2e", 00:10:02.326 "is_configured": true, 00:10:02.326 "data_offset": 2048, 00:10:02.326 "data_size": 63488 00:10:02.326 } 00:10:02.326 ] 00:10:02.326 }' 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.326 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.894 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:02.894 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:02.894 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:02.894 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:02.894 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:02.894 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:02.894 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:02.894 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:02.894 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.894 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:02.894 [2024-10-09 03:12:45.946942] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.894 03:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.894 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:02.894 "name": "Existed_Raid", 00:10:02.894 "aliases": [ 00:10:02.894 "3e10ed6e-f74b-4d4c-a840-d97346fe364f" 00:10:02.894 ], 00:10:02.894 "product_name": "Raid Volume", 00:10:02.894 "block_size": 512, 00:10:02.894 "num_blocks": 190464, 00:10:02.894 "uuid": "3e10ed6e-f74b-4d4c-a840-d97346fe364f", 00:10:02.894 "assigned_rate_limits": { 00:10:02.894 "rw_ios_per_sec": 0, 00:10:02.894 "rw_mbytes_per_sec": 0, 00:10:02.894 "r_mbytes_per_sec": 0, 00:10:02.894 "w_mbytes_per_sec": 0 00:10:02.894 }, 00:10:02.894 "claimed": false, 00:10:02.894 "zoned": false, 00:10:02.894 "supported_io_types": { 00:10:02.894 "read": true, 00:10:02.894 "write": true, 00:10:02.894 "unmap": true, 00:10:02.894 "flush": true, 00:10:02.894 "reset": true, 00:10:02.894 "nvme_admin": false, 00:10:02.894 "nvme_io": false, 00:10:02.895 "nvme_io_md": false, 00:10:02.895 "write_zeroes": true, 00:10:02.895 "zcopy": false, 00:10:02.895 "get_zone_info": false, 00:10:02.895 "zone_management": false, 00:10:02.895 "zone_append": false, 00:10:02.895 "compare": false, 00:10:02.895 "compare_and_write": false, 00:10:02.895 "abort": false, 00:10:02.895 "seek_hole": false, 00:10:02.895 "seek_data": false, 00:10:02.895 "copy": false, 00:10:02.895 "nvme_iov_md": false 00:10:02.895 }, 00:10:02.895 "memory_domains": [ 00:10:02.895 { 00:10:02.895 "dma_device_id": "system", 00:10:02.895 "dma_device_type": 1 00:10:02.895 }, 00:10:02.895 { 00:10:02.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.895 "dma_device_type": 2 00:10:02.895 }, 00:10:02.895 { 00:10:02.895 "dma_device_id": "system", 00:10:02.895 "dma_device_type": 1 00:10:02.895 }, 00:10:02.895 { 00:10:02.895 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.895 "dma_device_type": 2 00:10:02.895 }, 00:10:02.895 { 00:10:02.895 "dma_device_id": "system", 00:10:02.895 "dma_device_type": 1 00:10:02.895 }, 00:10:02.895 { 00:10:02.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.895 "dma_device_type": 2 00:10:02.895 } 00:10:02.895 ], 00:10:02.895 "driver_specific": { 00:10:02.895 "raid": { 00:10:02.895 "uuid": "3e10ed6e-f74b-4d4c-a840-d97346fe364f", 00:10:02.895 "strip_size_kb": 64, 00:10:02.895 "state": "online", 00:10:02.895 "raid_level": "concat", 00:10:02.895 "superblock": true, 00:10:02.895 "num_base_bdevs": 3, 00:10:02.895 "num_base_bdevs_discovered": 3, 00:10:02.895 "num_base_bdevs_operational": 3, 00:10:02.895 "base_bdevs_list": [ 00:10:02.895 { 00:10:02.895 "name": "NewBaseBdev", 00:10:02.895 "uuid": "c234fcda-fc50-4349-bd6e-d564a42d81f5", 00:10:02.895 "is_configured": true, 00:10:02.895 "data_offset": 2048, 00:10:02.895 "data_size": 63488 00:10:02.895 }, 00:10:02.895 { 00:10:02.895 "name": "BaseBdev2", 00:10:02.895 "uuid": "862730eb-c727-4c9c-9814-4b5490a92123", 00:10:02.895 "is_configured": true, 00:10:02.895 "data_offset": 2048, 00:10:02.895 "data_size": 63488 00:10:02.895 }, 00:10:02.895 { 00:10:02.895 "name": "BaseBdev3", 00:10:02.895 "uuid": "b9ee5f85-5674-4354-bdec-cb7b7ecf6b2e", 00:10:02.895 "is_configured": true, 00:10:02.895 "data_offset": 2048, 00:10:02.895 "data_size": 63488 00:10:02.895 } 00:10:02.895 ] 00:10:02.895 } 00:10:02.895 } 00:10:02.895 }' 00:10:02.895 03:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:02.895 BaseBdev2 00:10:02.895 BaseBdev3' 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:02.895 [2024-10-09 03:12:46.190109] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:02.895 [2024-10-09 03:12:46.190137] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:02.895 [2024-10-09 03:12:46.190219] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:02.895 [2024-10-09 03:12:46.190279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:02.895 [2024-10-09 03:12:46.190292] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66353 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 66353 ']' 00:10:02.895 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 66353 00:10:03.154 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:03.154 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:03.154 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66353 00:10:03.154 killing process with pid 66353 00:10:03.154 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:03.154 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:03.154 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66353' 00:10:03.154 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 66353 00:10:03.154 [2024-10-09 03:12:46.239664] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:03.154 03:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 66353 00:10:03.413 [2024-10-09 03:12:46.550769] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:04.791 03:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:04.791 ************************************ 00:10:04.791 END TEST raid_state_function_test_sb 00:10:04.791 ************************************ 00:10:04.791 00:10:04.791 real 0m10.516s 
00:10:04.791 user 0m16.253s 00:10:04.791 sys 0m1.970s 00:10:04.791 03:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:04.791 03:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:04.791 03:12:47 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:04.791 03:12:47 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:04.791 03:12:47 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.791 03:12:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:04.791 ************************************ 00:10:04.791 START TEST raid_superblock_test 00:10:04.791 ************************************ 00:10:04.791 03:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 3 00:10:04.791 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:04.791 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:04.791 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:04.791 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:04.791 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:04.791 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:04.791 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:04.791 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:04.792 03:12:47 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66973 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66973 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 66973 ']' 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:04.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:04.792 03:12:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:04.792 [2024-10-09 03:12:48.036265] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:10:04.792 [2024-10-09 03:12:48.036460] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66973 ] 00:10:05.051 [2024-10-09 03:12:48.184767] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.309 [2024-10-09 03:12:48.421011] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:05.568 [2024-10-09 03:12:48.641458] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.568 [2024-10-09 03:12:48.641497] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:05.568 03:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:05.568 03:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:05.568 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:05.568 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:05.568 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:05.568 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:05.568 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:05.568 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:05.568 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:05.568 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:05.568 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:05.568 
03:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.568 03:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.828 malloc1 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.828 [2024-10-09 03:12:48.915837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:05.828 [2024-10-09 03:12:48.916009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.828 [2024-10-09 03:12:48.916057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:05.828 [2024-10-09 03:12:48.916090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.828 [2024-10-09 03:12:48.918596] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.828 [2024-10-09 03:12:48.918672] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:05.828 pt1 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.828 malloc2 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.828 [2024-10-09 03:12:48.994715] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:05.828 [2024-10-09 03:12:48.994781] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.828 [2024-10-09 03:12:48.994808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:05.828 [2024-10-09 03:12:48.994817] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.828 [2024-10-09 03:12:48.997182] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.828 [2024-10-09 03:12:48.997218] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:05.828 
pt2 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:05.828 03:12:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.828 malloc3 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.828 [2024-10-09 03:12:49.060302] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:05.828 [2024-10-09 03:12:49.060358] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:05.828 [2024-10-09 03:12:49.060397] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:05.828 [2024-10-09 03:12:49.060406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:05.828 [2024-10-09 03:12:49.062714] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:05.828 [2024-10-09 03:12:49.062805] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:05.828 pt3 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.828 [2024-10-09 03:12:49.072355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:05.828 [2024-10-09 03:12:49.074426] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:05.828 [2024-10-09 03:12:49.074498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:05.828 [2024-10-09 03:12:49.074661] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:05.828 [2024-10-09 03:12:49.074674] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:05.828 [2024-10-09 03:12:49.074907] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
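The `bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s` call above reports `blockcnt 190464, blocklen 512`. That number follows from the values already in the log: each 32 MiB malloc bdev is 65536 blocks of 512 bytes, the `-s` superblock reserves a 2048-block `data_offset`, leaving `data_size` 63488 per base bdev, and concat sums the three. A quick arithmetic check:

```shell
#!/usr/bin/env bash
# Size arithmetic for the concat volume assembled above (values from the log).
base_mb=32
block_size=512
sb_offset=2048               # data_offset reserved per base bdev when -s is used
num_base_bdevs=3
base_blocks=$(( base_mb * 1024 * 1024 / block_size ))   # blocks per malloc bdev
data_size=$(( base_blocks - sb_offset ))                # usable blocks per base bdev
raid_blocks=$(( num_base_bdevs * data_size ))           # concat: usable sizes add up
echo "$raid_blocks"
```

`raid_blocks` comes out to 190464, matching both the `blockcnt` debug line and the `num_blocks` field in the `bdev_get_bdevs -b raid_bdev1` JSON further down.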
00:10:05.828 [2024-10-09 03:12:49.075095] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:05.828 [2024-10-09 03:12:49.075106] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:05.828 [2024-10-09 03:12:49.075255] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:05.828 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:05.829 03:12:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:05.829 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:05.829 "name": "raid_bdev1", 00:10:05.829 "uuid": "64676770-e38f-472a-bd14-931bb39f164c", 00:10:05.829 "strip_size_kb": 64, 00:10:05.829 "state": "online", 00:10:05.829 "raid_level": "concat", 00:10:05.829 "superblock": true, 00:10:05.829 "num_base_bdevs": 3, 00:10:05.829 "num_base_bdevs_discovered": 3, 00:10:05.829 "num_base_bdevs_operational": 3, 00:10:05.829 "base_bdevs_list": [ 00:10:05.829 { 00:10:05.829 "name": "pt1", 00:10:05.829 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:05.829 "is_configured": true, 00:10:05.829 "data_offset": 2048, 00:10:05.829 "data_size": 63488 00:10:05.829 }, 00:10:05.829 { 00:10:05.829 "name": "pt2", 00:10:05.829 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:05.829 "is_configured": true, 00:10:05.829 "data_offset": 2048, 00:10:05.829 "data_size": 63488 00:10:05.829 }, 00:10:05.829 { 00:10:05.829 "name": "pt3", 00:10:05.829 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:05.829 "is_configured": true, 00:10:05.829 "data_offset": 2048, 00:10:05.829 "data_size": 63488 00:10:05.829 } 00:10:05.829 ] 00:10:05.829 }' 00:10:06.088 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.088 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.347 [2024-10-09 03:12:49.547747] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:06.347 "name": "raid_bdev1", 00:10:06.347 "aliases": [ 00:10:06.347 "64676770-e38f-472a-bd14-931bb39f164c" 00:10:06.347 ], 00:10:06.347 "product_name": "Raid Volume", 00:10:06.347 "block_size": 512, 00:10:06.347 "num_blocks": 190464, 00:10:06.347 "uuid": "64676770-e38f-472a-bd14-931bb39f164c", 00:10:06.347 "assigned_rate_limits": { 00:10:06.347 "rw_ios_per_sec": 0, 00:10:06.347 "rw_mbytes_per_sec": 0, 00:10:06.347 "r_mbytes_per_sec": 0, 00:10:06.347 "w_mbytes_per_sec": 0 00:10:06.347 }, 00:10:06.347 "claimed": false, 00:10:06.347 "zoned": false, 00:10:06.347 "supported_io_types": { 00:10:06.347 "read": true, 00:10:06.347 "write": true, 00:10:06.347 "unmap": true, 00:10:06.347 "flush": true, 00:10:06.347 "reset": true, 00:10:06.347 "nvme_admin": false, 00:10:06.347 "nvme_io": false, 00:10:06.347 "nvme_io_md": false, 00:10:06.347 "write_zeroes": true, 00:10:06.347 "zcopy": false, 00:10:06.347 "get_zone_info": false, 00:10:06.347 "zone_management": false, 00:10:06.347 "zone_append": false, 00:10:06.347 "compare": 
false, 00:10:06.347 "compare_and_write": false, 00:10:06.347 "abort": false, 00:10:06.347 "seek_hole": false, 00:10:06.347 "seek_data": false, 00:10:06.347 "copy": false, 00:10:06.347 "nvme_iov_md": false 00:10:06.347 }, 00:10:06.347 "memory_domains": [ 00:10:06.347 { 00:10:06.347 "dma_device_id": "system", 00:10:06.347 "dma_device_type": 1 00:10:06.347 }, 00:10:06.347 { 00:10:06.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.347 "dma_device_type": 2 00:10:06.347 }, 00:10:06.347 { 00:10:06.347 "dma_device_id": "system", 00:10:06.347 "dma_device_type": 1 00:10:06.347 }, 00:10:06.347 { 00:10:06.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.347 "dma_device_type": 2 00:10:06.347 }, 00:10:06.347 { 00:10:06.347 "dma_device_id": "system", 00:10:06.347 "dma_device_type": 1 00:10:06.347 }, 00:10:06.347 { 00:10:06.347 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:06.347 "dma_device_type": 2 00:10:06.347 } 00:10:06.347 ], 00:10:06.347 "driver_specific": { 00:10:06.347 "raid": { 00:10:06.347 "uuid": "64676770-e38f-472a-bd14-931bb39f164c", 00:10:06.347 "strip_size_kb": 64, 00:10:06.347 "state": "online", 00:10:06.347 "raid_level": "concat", 00:10:06.347 "superblock": true, 00:10:06.347 "num_base_bdevs": 3, 00:10:06.347 "num_base_bdevs_discovered": 3, 00:10:06.347 "num_base_bdevs_operational": 3, 00:10:06.347 "base_bdevs_list": [ 00:10:06.347 { 00:10:06.347 "name": "pt1", 00:10:06.347 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.347 "is_configured": true, 00:10:06.347 "data_offset": 2048, 00:10:06.347 "data_size": 63488 00:10:06.347 }, 00:10:06.347 { 00:10:06.347 "name": "pt2", 00:10:06.347 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.347 "is_configured": true, 00:10:06.347 "data_offset": 2048, 00:10:06.347 "data_size": 63488 00:10:06.347 }, 00:10:06.347 { 00:10:06.347 "name": "pt3", 00:10:06.347 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.347 "is_configured": true, 00:10:06.347 "data_offset": 2048, 00:10:06.347 
"data_size": 63488 00:10:06.347 } 00:10:06.347 ] 00:10:06.347 } 00:10:06.347 } 00:10:06.347 }' 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:06.347 pt2 00:10:06.347 pt3' 00:10:06.347 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:06.607 03:12:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:06.607 [2024-10-09 03:12:49.799236] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:06.607 03:12:49 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=64676770-e38f-472a-bd14-931bb39f164c 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 64676770-e38f-472a-bd14-931bb39f164c ']' 00:10:06.607 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:06.608 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.608 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.608 [2024-10-09 03:12:49.846941] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:06.608 [2024-10-09 03:12:49.847009] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:06.608 [2024-10-09 03:12:49.847093] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:06.608 [2024-10-09 03:12:49.847170] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:06.608 [2024-10-09 03:12:49.847205] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:06.608 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.608 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:06.608 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.608 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.608 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.608 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.608 03:12:49 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:06.608 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:06.608 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:06.608 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:06.608 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.608 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.868 03:12:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.868 [2024-10-09 03:12:49.994745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:06.868 [2024-10-09 03:12:49.996885] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev malloc2 is claimed 00:10:06.868 [2024-10-09 03:12:49.996958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:06.868 [2024-10-09 03:12:49.997009] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:06.868 [2024-10-09 03:12:49.997086] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:06.868 [2024-10-09 03:12:49.997106] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:06.868 [2024-10-09 03:12:49.997122] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:06.868 [2024-10-09 03:12:49.997131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:06.868 request: 00:10:06.868 { 00:10:06.868 "name": "raid_bdev1", 00:10:06.868 "raid_level": "concat", 00:10:06.868 "base_bdevs": [ 00:10:06.868 "malloc1", 00:10:06.868 "malloc2", 00:10:06.868 "malloc3" 00:10:06.868 ], 00:10:06.868 "strip_size_kb": 64, 00:10:06.868 "superblock": false, 00:10:06.868 "method": "bdev_raid_create", 00:10:06.868 "req_id": 1 00:10:06.868 } 00:10:06.868 Got JSON-RPC error response 00:10:06.868 response: 00:10:06.868 { 00:10:06.868 "code": -17, 00:10:06.868 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:06.868 } 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # 
(( !es == 0 )) 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.868 [2024-10-09 03:12:50.058600] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:06.868 [2024-10-09 03:12:50.058712] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.868 [2024-10-09 03:12:50.058750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:06.868 [2024-10-09 03:12:50.058779] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.868 [2024-10-09 03:12:50.061281] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.868 [2024-10-09 03:12:50.061357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:06.868 [2024-10-09 03:12:50.061465] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:06.868 [2024-10-09 03:12:50.061553] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:06.868 pt1 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:06.868 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.869 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:06.869 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.869 "name": "raid_bdev1", 
00:10:06.869 "uuid": "64676770-e38f-472a-bd14-931bb39f164c", 00:10:06.869 "strip_size_kb": 64, 00:10:06.869 "state": "configuring", 00:10:06.869 "raid_level": "concat", 00:10:06.869 "superblock": true, 00:10:06.869 "num_base_bdevs": 3, 00:10:06.869 "num_base_bdevs_discovered": 1, 00:10:06.869 "num_base_bdevs_operational": 3, 00:10:06.869 "base_bdevs_list": [ 00:10:06.869 { 00:10:06.869 "name": "pt1", 00:10:06.869 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.869 "is_configured": true, 00:10:06.869 "data_offset": 2048, 00:10:06.869 "data_size": 63488 00:10:06.869 }, 00:10:06.869 { 00:10:06.869 "name": null, 00:10:06.869 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.869 "is_configured": false, 00:10:06.869 "data_offset": 2048, 00:10:06.869 "data_size": 63488 00:10:06.869 }, 00:10:06.869 { 00:10:06.869 "name": null, 00:10:06.869 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.869 "is_configured": false, 00:10:06.869 "data_offset": 2048, 00:10:06.869 "data_size": 63488 00:10:06.869 } 00:10:06.869 ] 00:10:06.869 }' 00:10:06.869 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.869 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.438 [2024-10-09 03:12:50.545753] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:07.438 [2024-10-09 03:12:50.545809] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.438 [2024-10-09 03:12:50.545832] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:07.438 [2024-10-09 03:12:50.545856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.438 [2024-10-09 03:12:50.546315] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.438 [2024-10-09 03:12:50.546346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:07.438 [2024-10-09 03:12:50.546425] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:07.438 [2024-10-09 03:12:50.546446] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:07.438 pt2 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.438 [2024-10-09 03:12:50.557767] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.438 "name": "raid_bdev1", 00:10:07.438 "uuid": "64676770-e38f-472a-bd14-931bb39f164c", 00:10:07.438 "strip_size_kb": 64, 00:10:07.438 "state": "configuring", 00:10:07.438 "raid_level": "concat", 00:10:07.438 "superblock": true, 00:10:07.438 "num_base_bdevs": 3, 00:10:07.438 "num_base_bdevs_discovered": 1, 00:10:07.438 "num_base_bdevs_operational": 3, 00:10:07.438 "base_bdevs_list": [ 00:10:07.438 { 00:10:07.438 "name": "pt1", 00:10:07.438 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:07.438 "is_configured": true, 00:10:07.438 "data_offset": 2048, 00:10:07.438 "data_size": 63488 00:10:07.438 }, 00:10:07.438 { 00:10:07.438 "name": null, 00:10:07.438 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.438 "is_configured": false, 00:10:07.438 "data_offset": 0, 00:10:07.438 "data_size": 63488 00:10:07.438 }, 00:10:07.438 { 00:10:07.438 "name": null, 00:10:07.438 
"uuid": "00000000-0000-0000-0000-000000000003", 00:10:07.438 "is_configured": false, 00:10:07.438 "data_offset": 2048, 00:10:07.438 "data_size": 63488 00:10:07.438 } 00:10:07.438 ] 00:10:07.438 }' 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.438 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.699 [2024-10-09 03:12:50.953037] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:07.699 [2024-10-09 03:12:50.953099] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.699 [2024-10-09 03:12:50.953116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:07.699 [2024-10-09 03:12:50.953129] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.699 [2024-10-09 03:12:50.953570] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.699 [2024-10-09 03:12:50.953589] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:07.699 [2024-10-09 03:12:50.953662] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:07.699 [2024-10-09 03:12:50.953697] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:07.699 pt2 00:10:07.699 03:12:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.699 [2024-10-09 03:12:50.965036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:07.699 [2024-10-09 03:12:50.965086] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.699 [2024-10-09 03:12:50.965099] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:07.699 [2024-10-09 03:12:50.965109] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.699 [2024-10-09 03:12:50.965488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.699 [2024-10-09 03:12:50.965511] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:07.699 [2024-10-09 03:12:50.965566] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:07.699 [2024-10-09 03:12:50.965586] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:07.699 [2024-10-09 03:12:50.965713] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:07.699 [2024-10-09 03:12:50.965725] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:07.699 [2024-10-09 03:12:50.966018] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d000005ee0 00:10:07.699 [2024-10-09 03:12:50.966184] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:07.699 [2024-10-09 03:12:50.966193] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:07.699 [2024-10-09 03:12:50.966328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:07.699 pt3 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.699 03:12:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.699 03:12:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:07.959 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.959 "name": "raid_bdev1", 00:10:07.959 "uuid": "64676770-e38f-472a-bd14-931bb39f164c", 00:10:07.959 "strip_size_kb": 64, 00:10:07.959 "state": "online", 00:10:07.959 "raid_level": "concat", 00:10:07.959 "superblock": true, 00:10:07.959 "num_base_bdevs": 3, 00:10:07.959 "num_base_bdevs_discovered": 3, 00:10:07.959 "num_base_bdevs_operational": 3, 00:10:07.959 "base_bdevs_list": [ 00:10:07.959 { 00:10:07.959 "name": "pt1", 00:10:07.959 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:07.959 "is_configured": true, 00:10:07.959 "data_offset": 2048, 00:10:07.959 "data_size": 63488 00:10:07.959 }, 00:10:07.959 { 00:10:07.959 "name": "pt2", 00:10:07.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.959 "is_configured": true, 00:10:07.959 "data_offset": 2048, 00:10:07.959 "data_size": 63488 00:10:07.959 }, 00:10:07.959 { 00:10:07.959 "name": "pt3", 00:10:07.959 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:07.959 "is_configured": true, 00:10:07.959 "data_offset": 2048, 00:10:07.959 "data_size": 63488 00:10:07.959 } 00:10:07.959 ] 00:10:07.959 }' 00:10:07.959 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.959 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.220 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:08.220 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=raid_bdev1 00:10:08.220 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:08.220 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:08.220 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:08.220 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:08.220 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:08.220 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:08.220 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.220 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.220 [2024-10-09 03:12:51.440614] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.220 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.220 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:08.220 "name": "raid_bdev1", 00:10:08.220 "aliases": [ 00:10:08.220 "64676770-e38f-472a-bd14-931bb39f164c" 00:10:08.220 ], 00:10:08.220 "product_name": "Raid Volume", 00:10:08.220 "block_size": 512, 00:10:08.220 "num_blocks": 190464, 00:10:08.220 "uuid": "64676770-e38f-472a-bd14-931bb39f164c", 00:10:08.220 "assigned_rate_limits": { 00:10:08.220 "rw_ios_per_sec": 0, 00:10:08.220 "rw_mbytes_per_sec": 0, 00:10:08.220 "r_mbytes_per_sec": 0, 00:10:08.220 "w_mbytes_per_sec": 0 00:10:08.220 }, 00:10:08.220 "claimed": false, 00:10:08.220 "zoned": false, 00:10:08.220 "supported_io_types": { 00:10:08.220 "read": true, 00:10:08.220 "write": true, 00:10:08.220 "unmap": true, 00:10:08.220 "flush": true, 00:10:08.220 "reset": true, 00:10:08.220 "nvme_admin": false, 00:10:08.220 "nvme_io": false, 
00:10:08.220 "nvme_io_md": false, 00:10:08.220 "write_zeroes": true, 00:10:08.220 "zcopy": false, 00:10:08.220 "get_zone_info": false, 00:10:08.220 "zone_management": false, 00:10:08.220 "zone_append": false, 00:10:08.220 "compare": false, 00:10:08.220 "compare_and_write": false, 00:10:08.220 "abort": false, 00:10:08.220 "seek_hole": false, 00:10:08.220 "seek_data": false, 00:10:08.220 "copy": false, 00:10:08.220 "nvme_iov_md": false 00:10:08.220 }, 00:10:08.220 "memory_domains": [ 00:10:08.220 { 00:10:08.220 "dma_device_id": "system", 00:10:08.220 "dma_device_type": 1 00:10:08.220 }, 00:10:08.220 { 00:10:08.220 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.220 "dma_device_type": 2 00:10:08.220 }, 00:10:08.220 { 00:10:08.220 "dma_device_id": "system", 00:10:08.220 "dma_device_type": 1 00:10:08.220 }, 00:10:08.221 { 00:10:08.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.221 "dma_device_type": 2 00:10:08.221 }, 00:10:08.221 { 00:10:08.221 "dma_device_id": "system", 00:10:08.221 "dma_device_type": 1 00:10:08.221 }, 00:10:08.221 { 00:10:08.221 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.221 "dma_device_type": 2 00:10:08.221 } 00:10:08.221 ], 00:10:08.221 "driver_specific": { 00:10:08.221 "raid": { 00:10:08.221 "uuid": "64676770-e38f-472a-bd14-931bb39f164c", 00:10:08.221 "strip_size_kb": 64, 00:10:08.221 "state": "online", 00:10:08.221 "raid_level": "concat", 00:10:08.221 "superblock": true, 00:10:08.221 "num_base_bdevs": 3, 00:10:08.221 "num_base_bdevs_discovered": 3, 00:10:08.221 "num_base_bdevs_operational": 3, 00:10:08.221 "base_bdevs_list": [ 00:10:08.221 { 00:10:08.221 "name": "pt1", 00:10:08.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.221 "is_configured": true, 00:10:08.221 "data_offset": 2048, 00:10:08.221 "data_size": 63488 00:10:08.221 }, 00:10:08.221 { 00:10:08.221 "name": "pt2", 00:10:08.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.221 "is_configured": true, 00:10:08.221 "data_offset": 2048, 00:10:08.221 
"data_size": 63488 00:10:08.221 }, 00:10:08.221 { 00:10:08.221 "name": "pt3", 00:10:08.221 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:08.221 "is_configured": true, 00:10:08.221 "data_offset": 2048, 00:10:08.221 "data_size": 63488 00:10:08.221 } 00:10:08.221 ] 00:10:08.221 } 00:10:08.221 } 00:10:08.221 }' 00:10:08.221 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:08.221 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:08.221 pt2 00:10:08.221 pt3' 00:10:08.221 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:08.481 [2024-10-09 03:12:51.700129] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 64676770-e38f-472a-bd14-931bb39f164c '!=' 64676770-e38f-472a-bd14-931bb39f164c ']' 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66973 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 66973 ']' 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 66973 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 66973 00:10:08.481 killing process with pid 66973 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 66973' 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 66973 00:10:08.481 [2024-10-09 03:12:51.782473] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 
00:10:08.481 [2024-10-09 03:12:51.782567] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:08.481 [2024-10-09 03:12:51.782629] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:08.481 [2024-10-09 03:12:51.782642] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:08.481 03:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 66973 00:10:09.050 [2024-10-09 03:12:52.102430] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.431 03:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:10.431 00:10:10.431 real 0m5.487s 00:10:10.431 user 0m7.654s 00:10:10.431 sys 0m0.982s 00:10:10.431 ************************************ 00:10:10.431 END TEST raid_superblock_test 00:10:10.431 ************************************ 00:10:10.431 03:12:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.431 03:12:53 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.431 03:12:53 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:10:10.431 03:12:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:10.431 03:12:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:10.431 03:12:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:10.431 ************************************ 00:10:10.431 START TEST raid_read_error_test 00:10:10.431 ************************************ 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 read 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local 
num_base_bdevs=3 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:10.431 03:12:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.2XalElSuyL 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67226 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67226 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 67226 ']' 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:10.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:10.431 03:12:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.432 [2024-10-09 03:12:53.609044] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:10:10.432 [2024-10-09 03:12:53.609235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67226 ] 00:10:10.691 [2024-10-09 03:12:53.773776] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:10.950 [2024-10-09 03:12:54.020963] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.210 [2024-10-09 03:12:54.255009] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.210 [2024-10-09 03:12:54.255049] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.210 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:11.210 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:11.210 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:11.210 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:11.210 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.210 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.210 BaseBdev1_malloc 00:10:11.210 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.211 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:11.211 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.211 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.211 true 00:10:11.211 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:11.211 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:11.211 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.211 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.211 [2024-10-09 03:12:54.511926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:11.211 [2024-10-09 03:12:54.511993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.211 [2024-10-09 03:12:54.512011] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:11.211 [2024-10-09 03:12:54.512023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.471 [2024-10-09 03:12:54.514337] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.471 [2024-10-09 03:12:54.514378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:11.471 BaseBdev1 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.471 BaseBdev2_malloc 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.471 true 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.471 [2024-10-09 03:12:54.594405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:11.471 [2024-10-09 03:12:54.594465] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.471 [2024-10-09 03:12:54.594482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:11.471 [2024-10-09 03:12:54.594494] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.471 [2024-10-09 03:12:54.596819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.471 [2024-10-09 03:12:54.596876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:11.471 BaseBdev2 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.471 BaseBdev3_malloc 00:10:11.471 03:12:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.471 true 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.471 [2024-10-09 03:12:54.666935] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:11.471 [2024-10-09 03:12:54.666987] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:11.471 [2024-10-09 03:12:54.667020] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:11.471 [2024-10-09 03:12:54.667031] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:11.471 [2024-10-09 03:12:54.669364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:11.471 [2024-10-09 03:12:54.669405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:11.471 BaseBdev3 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.471 [2024-10-09 03:12:54.679015] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.471 [2024-10-09 03:12:54.681119] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:11.471 [2024-10-09 03:12:54.681195] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:11.471 [2024-10-09 03:12:54.681390] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:11.471 [2024-10-09 03:12:54.681402] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:11.471 [2024-10-09 03:12:54.681662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:11.471 [2024-10-09 03:12:54.681816] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:11.471 [2024-10-09 03:12:54.681828] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:11.471 [2024-10-09 03:12:54.681993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.471 03:12:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.471 "name": "raid_bdev1", 00:10:11.471 "uuid": "30d52159-2604-4927-a83f-0a5a2bd79c8a", 00:10:11.471 "strip_size_kb": 64, 00:10:11.471 "state": "online", 00:10:11.471 "raid_level": "concat", 00:10:11.471 "superblock": true, 00:10:11.471 "num_base_bdevs": 3, 00:10:11.471 "num_base_bdevs_discovered": 3, 00:10:11.471 "num_base_bdevs_operational": 3, 00:10:11.471 "base_bdevs_list": [ 00:10:11.471 { 00:10:11.471 "name": "BaseBdev1", 00:10:11.471 "uuid": "5b3d2aae-c5df-5be9-b503-ecb487e21d57", 00:10:11.471 "is_configured": true, 00:10:11.471 "data_offset": 2048, 00:10:11.471 "data_size": 63488 00:10:11.471 }, 00:10:11.471 { 00:10:11.471 "name": "BaseBdev2", 00:10:11.471 "uuid": "194af9dc-2135-5653-ac37-0389812033cb", 00:10:11.471 "is_configured": true, 00:10:11.471 "data_offset": 2048, 00:10:11.471 "data_size": 63488 
00:10:11.471 }, 00:10:11.471 { 00:10:11.471 "name": "BaseBdev3", 00:10:11.471 "uuid": "711e50d0-e122-5fd5-9b22-6aa8ecdc4ece", 00:10:11.471 "is_configured": true, 00:10:11.471 "data_offset": 2048, 00:10:11.471 "data_size": 63488 00:10:11.471 } 00:10:11.471 ] 00:10:11.471 }' 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.471 03:12:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:12.041 03:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:12.041 03:12:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:12.041 [2024-10-09 03:12:55.215371] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:13.042 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:13.042 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.042 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.043 "name": "raid_bdev1", 00:10:13.043 "uuid": "30d52159-2604-4927-a83f-0a5a2bd79c8a", 00:10:13.043 "strip_size_kb": 64, 00:10:13.043 "state": "online", 00:10:13.043 "raid_level": "concat", 00:10:13.043 "superblock": true, 00:10:13.043 "num_base_bdevs": 3, 00:10:13.043 "num_base_bdevs_discovered": 3, 00:10:13.043 "num_base_bdevs_operational": 3, 00:10:13.043 "base_bdevs_list": [ 00:10:13.043 { 00:10:13.043 "name": "BaseBdev1", 00:10:13.043 "uuid": "5b3d2aae-c5df-5be9-b503-ecb487e21d57", 00:10:13.043 "is_configured": true, 00:10:13.043 "data_offset": 2048, 00:10:13.043 "data_size": 63488 
00:10:13.043 }, 00:10:13.043 { 00:10:13.043 "name": "BaseBdev2", 00:10:13.043 "uuid": "194af9dc-2135-5653-ac37-0389812033cb", 00:10:13.043 "is_configured": true, 00:10:13.043 "data_offset": 2048, 00:10:13.043 "data_size": 63488 00:10:13.043 }, 00:10:13.043 { 00:10:13.043 "name": "BaseBdev3", 00:10:13.043 "uuid": "711e50d0-e122-5fd5-9b22-6aa8ecdc4ece", 00:10:13.043 "is_configured": true, 00:10:13.043 "data_offset": 2048, 00:10:13.043 "data_size": 63488 00:10:13.043 } 00:10:13.043 ] 00:10:13.043 }' 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.043 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.302 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:13.302 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:13.302 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:13.302 [2024-10-09 03:12:56.563756] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:13.302 [2024-10-09 03:12:56.563881] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:13.302 [2024-10-09 03:12:56.566560] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:13.302 [2024-10-09 03:12:56.566616] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:13.302 [2024-10-09 03:12:56.566658] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:13.302 [2024-10-09 03:12:56.566668] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:13.302 { 00:10:13.302 "results": [ 00:10:13.302 { 00:10:13.302 "job": "raid_bdev1", 00:10:13.302 "core_mask": "0x1", 00:10:13.302 "workload": "randrw", 00:10:13.302 "percentage": 50, 
00:10:13.302 "status": "finished", 00:10:13.302 "queue_depth": 1, 00:10:13.302 "io_size": 131072, 00:10:13.302 "runtime": 1.349054, 00:10:13.302 "iops": 13924.572329943798, 00:10:13.302 "mibps": 1740.5715412429747, 00:10:13.302 "io_failed": 1, 00:10:13.302 "io_timeout": 0, 00:10:13.302 "avg_latency_us": 101.13347364036305, 00:10:13.302 "min_latency_us": 25.2646288209607, 00:10:13.302 "max_latency_us": 1402.2986899563318 00:10:13.302 } 00:10:13.302 ], 00:10:13.302 "core_count": 1 00:10:13.302 } 00:10:13.302 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:13.303 03:12:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67226 00:10:13.303 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 67226 ']' 00:10:13.303 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 67226 00:10:13.303 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:13.303 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:13.303 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67226 00:10:13.562 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:13.562 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:13.562 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67226' 00:10:13.562 killing process with pid 67226 00:10:13.562 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 67226 00:10:13.562 [2024-10-09 03:12:56.609760] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:13.562 03:12:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 67226 00:10:13.562 [2024-10-09 
03:12:56.862469] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:15.470 03:12:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.2XalElSuyL 00:10:15.470 03:12:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:15.470 03:12:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:15.470 03:12:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:15.470 03:12:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:15.470 03:12:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:15.470 03:12:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:15.470 03:12:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:15.470 ************************************ 00:10:15.470 END TEST raid_read_error_test 00:10:15.470 ************************************ 00:10:15.470 00:10:15.470 real 0m4.808s 00:10:15.470 user 0m5.489s 00:10:15.470 sys 0m0.699s 00:10:15.470 03:12:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:15.470 03:12:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.470 03:12:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:10:15.470 03:12:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:15.470 03:12:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:15.470 03:12:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:15.470 ************************************ 00:10:15.470 START TEST raid_write_error_test 00:10:15.470 ************************************ 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 3 write 00:10:15.470 03:12:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:15.470 03:12:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.PW89kcJd2D 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67372 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67372 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 67372 ']' 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:15.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:15.470 03:12:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:15.470 [2024-10-09 03:12:58.491505] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:10:15.470 [2024-10-09 03:12:58.491619] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67372 ] 00:10:15.470 [2024-10-09 03:12:58.654242] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.730 [2024-10-09 03:12:58.917008] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:15.989 [2024-10-09 03:12:59.153011] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:15.989 [2024-10-09 03:12:59.153061] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:16.249 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.249 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.250 BaseBdev1_malloc 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.250 true 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.250 [2024-10-09 03:12:59.376619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:16.250 [2024-10-09 03:12:59.376687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.250 [2024-10-09 03:12:59.376707] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:16.250 [2024-10-09 03:12:59.376718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.250 [2024-10-09 03:12:59.379131] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.250 [2024-10-09 03:12:59.379175] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:16.250 BaseBdev1 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.250 BaseBdev2_malloc 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.250 true 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.250 [2024-10-09 03:12:59.460497] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:16.250 [2024-10-09 03:12:59.460567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.250 [2024-10-09 03:12:59.460585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:16.250 [2024-10-09 03:12:59.460596] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.250 [2024-10-09 03:12:59.462999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.250 [2024-10-09 03:12:59.463110] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:16.250 BaseBdev2 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:16.250 03:12:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.250 BaseBdev3_malloc 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.250 true 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.250 [2024-10-09 03:12:59.535469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:16.250 [2024-10-09 03:12:59.535547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:16.250 [2024-10-09 03:12:59.535568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:16.250 [2024-10-09 03:12:59.535582] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:16.250 [2024-10-09 03:12:59.538304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:16.250 [2024-10-09 03:12:59.538351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:16.250 BaseBdev3 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.250 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.250 [2024-10-09 03:12:59.547539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:16.250 [2024-10-09 03:12:59.549878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:16.250 [2024-10-09 03:12:59.549973] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:16.250 [2024-10-09 03:12:59.550213] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:16.250 [2024-10-09 03:12:59.550231] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:16.250 [2024-10-09 03:12:59.550518] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:16.250 [2024-10-09 03:12:59.550684] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:16.250 [2024-10-09 03:12:59.550697] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:16.250 [2024-10-09 03:12:59.550875] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:16.510 "name": "raid_bdev1", 00:10:16.510 "uuid": "b295a50b-7dff-4f3e-9f0f-97b10d204606", 00:10:16.510 "strip_size_kb": 64, 00:10:16.510 "state": "online", 00:10:16.510 "raid_level": "concat", 00:10:16.510 "superblock": true, 00:10:16.510 "num_base_bdevs": 3, 00:10:16.510 "num_base_bdevs_discovered": 3, 00:10:16.510 "num_base_bdevs_operational": 3, 00:10:16.510 "base_bdevs_list": [ 00:10:16.510 { 00:10:16.510 
"name": "BaseBdev1", 00:10:16.510 "uuid": "0a6473f2-8276-589f-a61c-001ac022835a", 00:10:16.510 "is_configured": true, 00:10:16.510 "data_offset": 2048, 00:10:16.510 "data_size": 63488 00:10:16.510 }, 00:10:16.510 { 00:10:16.510 "name": "BaseBdev2", 00:10:16.510 "uuid": "0919588d-9e1c-5367-9c22-602200b61cad", 00:10:16.510 "is_configured": true, 00:10:16.510 "data_offset": 2048, 00:10:16.510 "data_size": 63488 00:10:16.510 }, 00:10:16.510 { 00:10:16.510 "name": "BaseBdev3", 00:10:16.510 "uuid": "47e49a38-3235-5475-8781-c10407475a0c", 00:10:16.510 "is_configured": true, 00:10:16.510 "data_offset": 2048, 00:10:16.510 "data_size": 63488 00:10:16.510 } 00:10:16.510 ] 00:10:16.510 }' 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:16.510 03:12:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:16.770 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:16.770 03:12:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:17.030 [2024-10-09 03:13:00.088188] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:17.970 03:13:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:17.970 03:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.970 03:13:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:17.970 "name": "raid_bdev1", 00:10:17.970 "uuid": "b295a50b-7dff-4f3e-9f0f-97b10d204606", 00:10:17.970 "strip_size_kb": 64, 00:10:17.970 "state": "online", 
00:10:17.970 "raid_level": "concat", 00:10:17.970 "superblock": true, 00:10:17.970 "num_base_bdevs": 3, 00:10:17.970 "num_base_bdevs_discovered": 3, 00:10:17.970 "num_base_bdevs_operational": 3, 00:10:17.970 "base_bdevs_list": [ 00:10:17.970 { 00:10:17.970 "name": "BaseBdev1", 00:10:17.970 "uuid": "0a6473f2-8276-589f-a61c-001ac022835a", 00:10:17.970 "is_configured": true, 00:10:17.970 "data_offset": 2048, 00:10:17.970 "data_size": 63488 00:10:17.970 }, 00:10:17.970 { 00:10:17.970 "name": "BaseBdev2", 00:10:17.970 "uuid": "0919588d-9e1c-5367-9c22-602200b61cad", 00:10:17.970 "is_configured": true, 00:10:17.970 "data_offset": 2048, 00:10:17.970 "data_size": 63488 00:10:17.970 }, 00:10:17.970 { 00:10:17.970 "name": "BaseBdev3", 00:10:17.970 "uuid": "47e49a38-3235-5475-8781-c10407475a0c", 00:10:17.970 "is_configured": true, 00:10:17.970 "data_offset": 2048, 00:10:17.970 "data_size": 63488 00:10:17.970 } 00:10:17.970 ] 00:10:17.970 }' 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:17.970 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.230 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:18.230 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:18.230 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.230 [2024-10-09 03:13:01.437189] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:18.230 [2024-10-09 03:13:01.437322] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:18.230 [2024-10-09 03:13:01.439935] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:18.230 [2024-10-09 03:13:01.440040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.230 [2024-10-09 03:13:01.440102] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:18.230 [2024-10-09 03:13:01.440141] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:18.230 { 00:10:18.230 "results": [ 00:10:18.230 { 00:10:18.230 "job": "raid_bdev1", 00:10:18.230 "core_mask": "0x1", 00:10:18.230 "workload": "randrw", 00:10:18.230 "percentage": 50, 00:10:18.230 "status": "finished", 00:10:18.230 "queue_depth": 1, 00:10:18.230 "io_size": 131072, 00:10:18.230 "runtime": 1.349494, 00:10:18.230 "iops": 14018.587707688956, 00:10:18.230 "mibps": 1752.3234634611194, 00:10:18.230 "io_failed": 1, 00:10:18.230 "io_timeout": 0, 00:10:18.230 "avg_latency_us": 100.41515179283043, 00:10:18.230 "min_latency_us": 25.823580786026202, 00:10:18.230 "max_latency_us": 1387.989519650655 00:10:18.230 } 00:10:18.230 ], 00:10:18.230 "core_count": 1 00:10:18.230 } 00:10:18.230 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:18.230 03:13:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67372 00:10:18.230 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 67372 ']' 00:10:18.230 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 67372 00:10:18.230 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:18.230 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:18.230 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67372 00:10:18.230 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:18.230 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:18.230 killing process with pid 67372 00:10:18.230 
03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67372' 00:10:18.230 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 67372 00:10:18.230 [2024-10-09 03:13:01.483857] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:18.230 03:13:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 67372 00:10:18.489 [2024-10-09 03:13:01.736153] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:20.397 03:13:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:20.397 03:13:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.PW89kcJd2D 00:10:20.397 03:13:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:20.397 03:13:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:10:20.397 03:13:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:10:20.397 03:13:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:20.397 03:13:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:20.397 03:13:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:10:20.397 00:10:20.397 real 0m4.808s 00:10:20.397 user 0m5.507s 00:10:20.397 sys 0m0.672s 00:10:20.397 ************************************ 00:10:20.397 END TEST raid_write_error_test 00:10:20.397 ************************************ 00:10:20.397 03:13:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:20.397 03:13:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.397 03:13:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:20.397 03:13:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:10:20.397 03:13:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:20.397 03:13:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:20.397 03:13:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:20.397 ************************************ 00:10:20.397 START TEST raid_state_function_test 00:10:20.397 ************************************ 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 false 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67521 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67521' 00:10:20.397 Process raid pid: 67521 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67521 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 67521 ']' 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:20.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:20.397 03:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.397 [2024-10-09 03:13:03.359566] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:10:20.397 [2024-10-09 03:13:03.359752] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:20.397 [2024-10-09 03:13:03.526983] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:20.657 [2024-10-09 03:13:03.783431] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:20.916 [2024-10-09 03:13:04.032502] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.916 [2024-10-09 03:13:04.032639] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:20.916 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:20.916 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:10:20.916 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:20.916 03:13:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:20.916 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.176 [2024-10-09 03:13:04.222030] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:21.176 [2024-10-09 03:13:04.222103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:21.176 [2024-10-09 03:13:04.222116] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.176 [2024-10-09 03:13:04.222128] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.176 [2024-10-09 03:13:04.222134] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:21.176 [2024-10-09 03:13:04.222143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.176 
03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.176 "name": "Existed_Raid", 00:10:21.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.176 "strip_size_kb": 0, 00:10:21.176 "state": "configuring", 00:10:21.176 "raid_level": "raid1", 00:10:21.176 "superblock": false, 00:10:21.176 "num_base_bdevs": 3, 00:10:21.176 "num_base_bdevs_discovered": 0, 00:10:21.176 "num_base_bdevs_operational": 3, 00:10:21.176 "base_bdevs_list": [ 00:10:21.176 { 00:10:21.176 "name": "BaseBdev1", 00:10:21.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.176 "is_configured": false, 00:10:21.176 "data_offset": 0, 00:10:21.176 "data_size": 0 00:10:21.176 }, 00:10:21.176 { 00:10:21.176 "name": "BaseBdev2", 00:10:21.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.176 "is_configured": false, 00:10:21.176 "data_offset": 0, 00:10:21.176 "data_size": 0 00:10:21.176 }, 00:10:21.176 { 00:10:21.176 "name": "BaseBdev3", 00:10:21.176 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.176 "is_configured": false, 00:10:21.176 "data_offset": 0, 00:10:21.176 "data_size": 0 00:10:21.176 } 00:10:21.176 ] 00:10:21.176 }' 00:10:21.176 03:13:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.176 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.436 [2024-10-09 03:13:04.605255] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.436 [2024-10-09 03:13:04.605296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.436 [2024-10-09 03:13:04.617265] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:21.436 [2024-10-09 03:13:04.617347] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:21.436 [2024-10-09 03:13:04.617375] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.436 [2024-10-09 03:13:04.617398] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.436 [2024-10-09 03:13:04.617415] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:21.436 [2024-10-09 03:13:04.617437] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.436 [2024-10-09 03:13:04.684265] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.436 BaseBdev1 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.436 [ 00:10:21.436 { 00:10:21.436 "name": "BaseBdev1", 00:10:21.436 "aliases": [ 00:10:21.436 "0d3ef5ba-357e-44f7-851f-9ab386e983ca" 00:10:21.436 ], 00:10:21.436 "product_name": "Malloc disk", 00:10:21.436 "block_size": 512, 00:10:21.436 "num_blocks": 65536, 00:10:21.436 "uuid": "0d3ef5ba-357e-44f7-851f-9ab386e983ca", 00:10:21.436 "assigned_rate_limits": { 00:10:21.436 "rw_ios_per_sec": 0, 00:10:21.436 "rw_mbytes_per_sec": 0, 00:10:21.436 "r_mbytes_per_sec": 0, 00:10:21.436 "w_mbytes_per_sec": 0 00:10:21.436 }, 00:10:21.436 "claimed": true, 00:10:21.436 "claim_type": "exclusive_write", 00:10:21.436 "zoned": false, 00:10:21.436 "supported_io_types": { 00:10:21.436 "read": true, 00:10:21.436 "write": true, 00:10:21.436 "unmap": true, 00:10:21.436 "flush": true, 00:10:21.436 "reset": true, 00:10:21.436 "nvme_admin": false, 00:10:21.436 "nvme_io": false, 00:10:21.436 "nvme_io_md": false, 00:10:21.436 "write_zeroes": true, 00:10:21.436 "zcopy": true, 00:10:21.436 "get_zone_info": false, 00:10:21.436 "zone_management": false, 00:10:21.436 "zone_append": false, 00:10:21.436 "compare": false, 00:10:21.436 "compare_and_write": false, 00:10:21.436 "abort": true, 00:10:21.436 "seek_hole": false, 00:10:21.436 "seek_data": false, 00:10:21.436 "copy": true, 00:10:21.436 "nvme_iov_md": false 00:10:21.436 }, 00:10:21.436 "memory_domains": [ 00:10:21.436 { 00:10:21.436 "dma_device_id": "system", 00:10:21.436 "dma_device_type": 1 00:10:21.436 }, 00:10:21.436 { 00:10:21.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.436 "dma_device_type": 2 00:10:21.436 } 00:10:21.436 ], 00:10:21.436 "driver_specific": {} 00:10:21.436 } 00:10:21.436 ] 00:10:21.436 03:13:04 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.436 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.696 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.696 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:21.696 "name": "Existed_Raid", 00:10:21.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.696 "strip_size_kb": 0, 00:10:21.696 "state": "configuring", 00:10:21.696 "raid_level": "raid1", 00:10:21.696 "superblock": false, 00:10:21.696 "num_base_bdevs": 3, 00:10:21.696 "num_base_bdevs_discovered": 1, 00:10:21.696 "num_base_bdevs_operational": 3, 00:10:21.696 "base_bdevs_list": [ 00:10:21.696 { 00:10:21.696 "name": "BaseBdev1", 00:10:21.696 "uuid": "0d3ef5ba-357e-44f7-851f-9ab386e983ca", 00:10:21.696 "is_configured": true, 00:10:21.696 "data_offset": 0, 00:10:21.696 "data_size": 65536 00:10:21.696 }, 00:10:21.696 { 00:10:21.696 "name": "BaseBdev2", 00:10:21.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.696 "is_configured": false, 00:10:21.696 "data_offset": 0, 00:10:21.696 "data_size": 0 00:10:21.696 }, 00:10:21.696 { 00:10:21.696 "name": "BaseBdev3", 00:10:21.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.696 "is_configured": false, 00:10:21.696 "data_offset": 0, 00:10:21.696 "data_size": 0 00:10:21.696 } 00:10:21.696 ] 00:10:21.696 }' 00:10:21.696 03:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.696 03:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.956 [2024-10-09 03:13:05.191407] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:21.956 [2024-10-09 03:13:05.191449] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.956 [2024-10-09 03:13:05.199442] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.956 [2024-10-09 03:13:05.201544] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:21.956 [2024-10-09 03:13:05.201585] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:21.956 [2024-10-09 03:13:05.201595] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:21.956 [2024-10-09 03:13:05.201604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:21.956 "name": "Existed_Raid", 00:10:21.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.956 "strip_size_kb": 0, 00:10:21.956 "state": "configuring", 00:10:21.956 "raid_level": "raid1", 00:10:21.956 "superblock": false, 00:10:21.956 "num_base_bdevs": 3, 00:10:21.956 "num_base_bdevs_discovered": 1, 00:10:21.956 "num_base_bdevs_operational": 3, 00:10:21.956 "base_bdevs_list": [ 00:10:21.956 { 00:10:21.956 "name": "BaseBdev1", 00:10:21.956 "uuid": "0d3ef5ba-357e-44f7-851f-9ab386e983ca", 00:10:21.956 "is_configured": true, 00:10:21.956 "data_offset": 0, 00:10:21.956 "data_size": 65536 00:10:21.956 }, 00:10:21.956 { 00:10:21.956 "name": "BaseBdev2", 00:10:21.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.956 
"is_configured": false, 00:10:21.956 "data_offset": 0, 00:10:21.956 "data_size": 0 00:10:21.956 }, 00:10:21.956 { 00:10:21.956 "name": "BaseBdev3", 00:10:21.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:21.956 "is_configured": false, 00:10:21.956 "data_offset": 0, 00:10:21.956 "data_size": 0 00:10:21.956 } 00:10:21.956 ] 00:10:21.956 }' 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:21.956 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.526 [2024-10-09 03:13:05.636983] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:22.526 BaseBdev2 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:22.526 03:13:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.526 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.526 [ 00:10:22.526 { 00:10:22.526 "name": "BaseBdev2", 00:10:22.526 "aliases": [ 00:10:22.526 "6bed63c6-bfeb-4a92-87ff-0097d3067a43" 00:10:22.526 ], 00:10:22.526 "product_name": "Malloc disk", 00:10:22.526 "block_size": 512, 00:10:22.526 "num_blocks": 65536, 00:10:22.526 "uuid": "6bed63c6-bfeb-4a92-87ff-0097d3067a43", 00:10:22.526 "assigned_rate_limits": { 00:10:22.526 "rw_ios_per_sec": 0, 00:10:22.526 "rw_mbytes_per_sec": 0, 00:10:22.526 "r_mbytes_per_sec": 0, 00:10:22.526 "w_mbytes_per_sec": 0 00:10:22.526 }, 00:10:22.526 "claimed": true, 00:10:22.526 "claim_type": "exclusive_write", 00:10:22.526 "zoned": false, 00:10:22.526 "supported_io_types": { 00:10:22.526 "read": true, 00:10:22.526 "write": true, 00:10:22.526 "unmap": true, 00:10:22.526 "flush": true, 00:10:22.526 "reset": true, 00:10:22.526 "nvme_admin": false, 00:10:22.526 "nvme_io": false, 00:10:22.526 "nvme_io_md": false, 00:10:22.526 "write_zeroes": true, 00:10:22.526 "zcopy": true, 00:10:22.526 "get_zone_info": false, 00:10:22.526 "zone_management": false, 00:10:22.526 "zone_append": false, 00:10:22.526 "compare": false, 00:10:22.526 "compare_and_write": false, 00:10:22.526 "abort": true, 00:10:22.526 "seek_hole": false, 00:10:22.526 "seek_data": false, 00:10:22.526 "copy": true, 00:10:22.526 "nvme_iov_md": false 00:10:22.526 }, 00:10:22.527 
"memory_domains": [ 00:10:22.527 { 00:10:22.527 "dma_device_id": "system", 00:10:22.527 "dma_device_type": 1 00:10:22.527 }, 00:10:22.527 { 00:10:22.527 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:22.527 "dma_device_type": 2 00:10:22.527 } 00:10:22.527 ], 00:10:22.527 "driver_specific": {} 00:10:22.527 } 00:10:22.527 ] 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:22.527 "name": "Existed_Raid", 00:10:22.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.527 "strip_size_kb": 0, 00:10:22.527 "state": "configuring", 00:10:22.527 "raid_level": "raid1", 00:10:22.527 "superblock": false, 00:10:22.527 "num_base_bdevs": 3, 00:10:22.527 "num_base_bdevs_discovered": 2, 00:10:22.527 "num_base_bdevs_operational": 3, 00:10:22.527 "base_bdevs_list": [ 00:10:22.527 { 00:10:22.527 "name": "BaseBdev1", 00:10:22.527 "uuid": "0d3ef5ba-357e-44f7-851f-9ab386e983ca", 00:10:22.527 "is_configured": true, 00:10:22.527 "data_offset": 0, 00:10:22.527 "data_size": 65536 00:10:22.527 }, 00:10:22.527 { 00:10:22.527 "name": "BaseBdev2", 00:10:22.527 "uuid": "6bed63c6-bfeb-4a92-87ff-0097d3067a43", 00:10:22.527 "is_configured": true, 00:10:22.527 "data_offset": 0, 00:10:22.527 "data_size": 65536 00:10:22.527 }, 00:10:22.527 { 00:10:22.527 "name": "BaseBdev3", 00:10:22.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:22.527 "is_configured": false, 00:10:22.527 "data_offset": 0, 00:10:22.527 "data_size": 0 00:10:22.527 } 00:10:22.527 ] 00:10:22.527 }' 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:22.527 03:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:22.837 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:10:22.837 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.837 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.111 [2024-10-09 03:13:06.125755] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:23.111 [2024-10-09 03:13:06.125929] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:23.111 [2024-10-09 03:13:06.125966] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:23.111 [2024-10-09 03:13:06.126310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:23.111 [2024-10-09 03:13:06.126546] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:23.111 [2024-10-09 03:13:06.126587] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:23.111 [2024-10-09 03:13:06.126923] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.111 BaseBdev3 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.111 [ 00:10:23.111 { 00:10:23.111 "name": "BaseBdev3", 00:10:23.111 "aliases": [ 00:10:23.111 "6f0866a5-0ec1-4182-a07f-8d3b6faef9a5" 00:10:23.111 ], 00:10:23.111 "product_name": "Malloc disk", 00:10:23.111 "block_size": 512, 00:10:23.111 "num_blocks": 65536, 00:10:23.111 "uuid": "6f0866a5-0ec1-4182-a07f-8d3b6faef9a5", 00:10:23.111 "assigned_rate_limits": { 00:10:23.111 "rw_ios_per_sec": 0, 00:10:23.111 "rw_mbytes_per_sec": 0, 00:10:23.111 "r_mbytes_per_sec": 0, 00:10:23.111 "w_mbytes_per_sec": 0 00:10:23.111 }, 00:10:23.111 "claimed": true, 00:10:23.111 "claim_type": "exclusive_write", 00:10:23.111 "zoned": false, 00:10:23.111 "supported_io_types": { 00:10:23.111 "read": true, 00:10:23.111 "write": true, 00:10:23.111 "unmap": true, 00:10:23.111 "flush": true, 00:10:23.111 "reset": true, 00:10:23.111 "nvme_admin": false, 00:10:23.111 "nvme_io": false, 00:10:23.111 "nvme_io_md": false, 00:10:23.111 "write_zeroes": true, 00:10:23.111 "zcopy": true, 00:10:23.111 "get_zone_info": false, 00:10:23.111 "zone_management": false, 00:10:23.111 "zone_append": false, 00:10:23.111 "compare": false, 00:10:23.111 "compare_and_write": false, 00:10:23.111 "abort": true, 00:10:23.111 "seek_hole": false, 00:10:23.111 "seek_data": false, 00:10:23.111 
"copy": true, 00:10:23.111 "nvme_iov_md": false 00:10:23.111 }, 00:10:23.111 "memory_domains": [ 00:10:23.111 { 00:10:23.111 "dma_device_id": "system", 00:10:23.111 "dma_device_type": 1 00:10:23.111 }, 00:10:23.111 { 00:10:23.111 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.111 "dma_device_type": 2 00:10:23.111 } 00:10:23.111 ], 00:10:23.111 "driver_specific": {} 00:10:23.111 } 00:10:23.111 ] 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.111 03:13:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.111 "name": "Existed_Raid", 00:10:23.111 "uuid": "403596ea-7568-494d-b7e7-aef182f59bc4", 00:10:23.111 "strip_size_kb": 0, 00:10:23.111 "state": "online", 00:10:23.111 "raid_level": "raid1", 00:10:23.111 "superblock": false, 00:10:23.111 "num_base_bdevs": 3, 00:10:23.111 "num_base_bdevs_discovered": 3, 00:10:23.111 "num_base_bdevs_operational": 3, 00:10:23.111 "base_bdevs_list": [ 00:10:23.111 { 00:10:23.111 "name": "BaseBdev1", 00:10:23.111 "uuid": "0d3ef5ba-357e-44f7-851f-9ab386e983ca", 00:10:23.111 "is_configured": true, 00:10:23.111 "data_offset": 0, 00:10:23.111 "data_size": 65536 00:10:23.111 }, 00:10:23.111 { 00:10:23.111 "name": "BaseBdev2", 00:10:23.111 "uuid": "6bed63c6-bfeb-4a92-87ff-0097d3067a43", 00:10:23.111 "is_configured": true, 00:10:23.111 "data_offset": 0, 00:10:23.111 "data_size": 65536 00:10:23.111 }, 00:10:23.111 { 00:10:23.111 "name": "BaseBdev3", 00:10:23.111 "uuid": "6f0866a5-0ec1-4182-a07f-8d3b6faef9a5", 00:10:23.111 "is_configured": true, 00:10:23.111 "data_offset": 0, 00:10:23.111 "data_size": 65536 00:10:23.111 } 00:10:23.111 ] 00:10:23.111 }' 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.111 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.371 03:13:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:23.371 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:23.371 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:23.371 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:23.371 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:23.371 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:23.371 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:23.371 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:23.371 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.371 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.371 [2024-10-09 03:13:06.609344] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:23.371 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.371 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:23.371 "name": "Existed_Raid", 00:10:23.371 "aliases": [ 00:10:23.371 "403596ea-7568-494d-b7e7-aef182f59bc4" 00:10:23.371 ], 00:10:23.371 "product_name": "Raid Volume", 00:10:23.371 "block_size": 512, 00:10:23.371 "num_blocks": 65536, 00:10:23.371 "uuid": "403596ea-7568-494d-b7e7-aef182f59bc4", 00:10:23.371 "assigned_rate_limits": { 00:10:23.371 "rw_ios_per_sec": 0, 00:10:23.371 "rw_mbytes_per_sec": 0, 00:10:23.371 "r_mbytes_per_sec": 0, 00:10:23.371 "w_mbytes_per_sec": 0 00:10:23.371 }, 00:10:23.371 "claimed": false, 00:10:23.371 "zoned": false, 
00:10:23.371 "supported_io_types": { 00:10:23.371 "read": true, 00:10:23.371 "write": true, 00:10:23.371 "unmap": false, 00:10:23.371 "flush": false, 00:10:23.371 "reset": true, 00:10:23.371 "nvme_admin": false, 00:10:23.371 "nvme_io": false, 00:10:23.371 "nvme_io_md": false, 00:10:23.371 "write_zeroes": true, 00:10:23.371 "zcopy": false, 00:10:23.371 "get_zone_info": false, 00:10:23.371 "zone_management": false, 00:10:23.371 "zone_append": false, 00:10:23.371 "compare": false, 00:10:23.371 "compare_and_write": false, 00:10:23.371 "abort": false, 00:10:23.371 "seek_hole": false, 00:10:23.371 "seek_data": false, 00:10:23.371 "copy": false, 00:10:23.371 "nvme_iov_md": false 00:10:23.371 }, 00:10:23.371 "memory_domains": [ 00:10:23.371 { 00:10:23.371 "dma_device_id": "system", 00:10:23.371 "dma_device_type": 1 00:10:23.371 }, 00:10:23.371 { 00:10:23.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.371 "dma_device_type": 2 00:10:23.371 }, 00:10:23.371 { 00:10:23.371 "dma_device_id": "system", 00:10:23.371 "dma_device_type": 1 00:10:23.371 }, 00:10:23.371 { 00:10:23.371 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.371 "dma_device_type": 2 00:10:23.371 }, 00:10:23.371 { 00:10:23.371 "dma_device_id": "system", 00:10:23.371 "dma_device_type": 1 00:10:23.371 }, 00:10:23.371 { 00:10:23.372 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:23.372 "dma_device_type": 2 00:10:23.372 } 00:10:23.372 ], 00:10:23.372 "driver_specific": { 00:10:23.372 "raid": { 00:10:23.372 "uuid": "403596ea-7568-494d-b7e7-aef182f59bc4", 00:10:23.372 "strip_size_kb": 0, 00:10:23.372 "state": "online", 00:10:23.372 "raid_level": "raid1", 00:10:23.372 "superblock": false, 00:10:23.372 "num_base_bdevs": 3, 00:10:23.372 "num_base_bdevs_discovered": 3, 00:10:23.372 "num_base_bdevs_operational": 3, 00:10:23.372 "base_bdevs_list": [ 00:10:23.372 { 00:10:23.372 "name": "BaseBdev1", 00:10:23.372 "uuid": "0d3ef5ba-357e-44f7-851f-9ab386e983ca", 00:10:23.372 "is_configured": true, 00:10:23.372 
"data_offset": 0, 00:10:23.372 "data_size": 65536 00:10:23.372 }, 00:10:23.372 { 00:10:23.372 "name": "BaseBdev2", 00:10:23.372 "uuid": "6bed63c6-bfeb-4a92-87ff-0097d3067a43", 00:10:23.372 "is_configured": true, 00:10:23.372 "data_offset": 0, 00:10:23.372 "data_size": 65536 00:10:23.372 }, 00:10:23.372 { 00:10:23.372 "name": "BaseBdev3", 00:10:23.372 "uuid": "6f0866a5-0ec1-4182-a07f-8d3b6faef9a5", 00:10:23.372 "is_configured": true, 00:10:23.372 "data_offset": 0, 00:10:23.372 "data_size": 65536 00:10:23.372 } 00:10:23.372 ] 00:10:23.372 } 00:10:23.372 } 00:10:23.372 }' 00:10:23.372 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:23.632 BaseBdev2 00:10:23.632 BaseBdev3' 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.632 03:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.632 [2024-10-09 03:13:06.896524] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:23.892 "name": "Existed_Raid", 00:10:23.892 "uuid": "403596ea-7568-494d-b7e7-aef182f59bc4", 00:10:23.892 "strip_size_kb": 0, 00:10:23.892 "state": "online", 00:10:23.892 "raid_level": "raid1", 00:10:23.892 "superblock": false, 00:10:23.892 "num_base_bdevs": 3, 00:10:23.892 "num_base_bdevs_discovered": 2, 00:10:23.892 "num_base_bdevs_operational": 2, 00:10:23.892 "base_bdevs_list": [ 00:10:23.892 { 00:10:23.892 "name": null, 00:10:23.892 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:23.892 "is_configured": false, 00:10:23.892 "data_offset": 0, 00:10:23.892 "data_size": 65536 00:10:23.892 }, 00:10:23.892 { 00:10:23.892 "name": "BaseBdev2", 00:10:23.892 "uuid": "6bed63c6-bfeb-4a92-87ff-0097d3067a43", 00:10:23.892 "is_configured": true, 00:10:23.892 "data_offset": 0, 00:10:23.892 "data_size": 65536 00:10:23.892 }, 00:10:23.892 { 00:10:23.892 "name": "BaseBdev3", 00:10:23.892 "uuid": "6f0866a5-0ec1-4182-a07f-8d3b6faef9a5", 00:10:23.892 "is_configured": true, 00:10:23.892 "data_offset": 0, 00:10:23.892 "data_size": 65536 00:10:23.892 } 00:10:23.892 ] 
00:10:23.892 }' 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:23.892 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.152 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:24.152 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.152 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.152 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:24.152 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.152 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.152 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.412 [2024-10-09 03:13:07.463395] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.412 03:13:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.412 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.412 [2024-10-09 03:13:07.625087] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:24.412 [2024-10-09 03:13:07.625211] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:24.672 [2024-10-09 03:13:07.729001] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:24.672 [2024-10-09 03:13:07.729130] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:24.672 [2024-10-09 03:13:07.729173] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:24.672 03:13:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.672 BaseBdev2 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:24.672 
03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.672 [ 00:10:24.672 { 00:10:24.672 "name": "BaseBdev2", 00:10:24.672 "aliases": [ 00:10:24.672 "37b379e6-d1cc-462b-8b1a-751f3ecf04f2" 00:10:24.672 ], 00:10:24.672 "product_name": "Malloc disk", 00:10:24.672 "block_size": 512, 00:10:24.672 "num_blocks": 65536, 00:10:24.672 "uuid": "37b379e6-d1cc-462b-8b1a-751f3ecf04f2", 00:10:24.672 "assigned_rate_limits": { 00:10:24.672 "rw_ios_per_sec": 0, 00:10:24.672 "rw_mbytes_per_sec": 0, 00:10:24.672 "r_mbytes_per_sec": 0, 00:10:24.672 "w_mbytes_per_sec": 0 00:10:24.672 }, 00:10:24.672 "claimed": false, 00:10:24.672 "zoned": false, 00:10:24.672 "supported_io_types": { 00:10:24.672 "read": true, 00:10:24.672 "write": true, 00:10:24.672 "unmap": true, 00:10:24.672 "flush": true, 00:10:24.672 "reset": true, 00:10:24.672 "nvme_admin": false, 00:10:24.672 "nvme_io": false, 00:10:24.672 "nvme_io_md": false, 00:10:24.672 "write_zeroes": true, 
00:10:24.672 "zcopy": true, 00:10:24.672 "get_zone_info": false, 00:10:24.672 "zone_management": false, 00:10:24.672 "zone_append": false, 00:10:24.672 "compare": false, 00:10:24.672 "compare_and_write": false, 00:10:24.672 "abort": true, 00:10:24.672 "seek_hole": false, 00:10:24.672 "seek_data": false, 00:10:24.672 "copy": true, 00:10:24.672 "nvme_iov_md": false 00:10:24.672 }, 00:10:24.672 "memory_domains": [ 00:10:24.672 { 00:10:24.672 "dma_device_id": "system", 00:10:24.672 "dma_device_type": 1 00:10:24.672 }, 00:10:24.672 { 00:10:24.672 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.672 "dma_device_type": 2 00:10:24.672 } 00:10:24.672 ], 00:10:24.672 "driver_specific": {} 00:10:24.672 } 00:10:24.672 ] 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.672 BaseBdev3 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:24.672 03:13:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.672 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.672 [ 00:10:24.672 { 00:10:24.672 "name": "BaseBdev3", 00:10:24.672 "aliases": [ 00:10:24.672 "5c3839e1-e97e-4e7d-b805-1e3ec19be37e" 00:10:24.672 ], 00:10:24.672 "product_name": "Malloc disk", 00:10:24.672 "block_size": 512, 00:10:24.672 "num_blocks": 65536, 00:10:24.672 "uuid": "5c3839e1-e97e-4e7d-b805-1e3ec19be37e", 00:10:24.672 "assigned_rate_limits": { 00:10:24.672 "rw_ios_per_sec": 0, 00:10:24.672 "rw_mbytes_per_sec": 0, 00:10:24.672 "r_mbytes_per_sec": 0, 00:10:24.672 "w_mbytes_per_sec": 0 00:10:24.672 }, 00:10:24.672 "claimed": false, 00:10:24.672 "zoned": false, 00:10:24.672 "supported_io_types": { 00:10:24.672 "read": true, 00:10:24.672 "write": true, 00:10:24.672 "unmap": true, 00:10:24.672 "flush": true, 00:10:24.672 "reset": true, 00:10:24.672 "nvme_admin": false, 00:10:24.672 "nvme_io": false, 00:10:24.672 "nvme_io_md": false, 00:10:24.672 "write_zeroes": true, 
00:10:24.673 "zcopy": true, 00:10:24.673 "get_zone_info": false, 00:10:24.673 "zone_management": false, 00:10:24.673 "zone_append": false, 00:10:24.673 "compare": false, 00:10:24.673 "compare_and_write": false, 00:10:24.673 "abort": true, 00:10:24.673 "seek_hole": false, 00:10:24.673 "seek_data": false, 00:10:24.673 "copy": true, 00:10:24.673 "nvme_iov_md": false 00:10:24.673 }, 00:10:24.673 "memory_domains": [ 00:10:24.673 { 00:10:24.673 "dma_device_id": "system", 00:10:24.673 "dma_device_type": 1 00:10:24.673 }, 00:10:24.673 { 00:10:24.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:24.673 "dma_device_type": 2 00:10:24.673 } 00:10:24.673 ], 00:10:24.673 "driver_specific": {} 00:10:24.673 } 00:10:24.673 ] 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.673 [2024-10-09 03:13:07.944500] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:24.673 [2024-10-09 03:13:07.944619] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:24.673 [2024-10-09 03:13:07.944677] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.673 [2024-10-09 03:13:07.946770] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.673 03:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.933 03:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:10:24.933 "name": "Existed_Raid", 00:10:24.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.933 "strip_size_kb": 0, 00:10:24.933 "state": "configuring", 00:10:24.933 "raid_level": "raid1", 00:10:24.933 "superblock": false, 00:10:24.933 "num_base_bdevs": 3, 00:10:24.933 "num_base_bdevs_discovered": 2, 00:10:24.933 "num_base_bdevs_operational": 3, 00:10:24.933 "base_bdevs_list": [ 00:10:24.933 { 00:10:24.933 "name": "BaseBdev1", 00:10:24.933 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:24.933 "is_configured": false, 00:10:24.933 "data_offset": 0, 00:10:24.933 "data_size": 0 00:10:24.933 }, 00:10:24.933 { 00:10:24.933 "name": "BaseBdev2", 00:10:24.933 "uuid": "37b379e6-d1cc-462b-8b1a-751f3ecf04f2", 00:10:24.933 "is_configured": true, 00:10:24.933 "data_offset": 0, 00:10:24.933 "data_size": 65536 00:10:24.933 }, 00:10:24.933 { 00:10:24.933 "name": "BaseBdev3", 00:10:24.933 "uuid": "5c3839e1-e97e-4e7d-b805-1e3ec19be37e", 00:10:24.933 "is_configured": true, 00:10:24.933 "data_offset": 0, 00:10:24.933 "data_size": 65536 00:10:24.933 } 00:10:24.933 ] 00:10:24.933 }' 00:10:24.933 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.933 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.192 [2024-10-09 03:13:08.371789] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.192 "name": "Existed_Raid", 00:10:25.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.192 "strip_size_kb": 0, 00:10:25.192 "state": "configuring", 00:10:25.192 "raid_level": "raid1", 00:10:25.192 "superblock": false, 00:10:25.192 "num_base_bdevs": 3, 
00:10:25.192 "num_base_bdevs_discovered": 1, 00:10:25.192 "num_base_bdevs_operational": 3, 00:10:25.192 "base_bdevs_list": [ 00:10:25.192 { 00:10:25.192 "name": "BaseBdev1", 00:10:25.192 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.192 "is_configured": false, 00:10:25.192 "data_offset": 0, 00:10:25.192 "data_size": 0 00:10:25.192 }, 00:10:25.192 { 00:10:25.192 "name": null, 00:10:25.192 "uuid": "37b379e6-d1cc-462b-8b1a-751f3ecf04f2", 00:10:25.192 "is_configured": false, 00:10:25.192 "data_offset": 0, 00:10:25.192 "data_size": 65536 00:10:25.192 }, 00:10:25.192 { 00:10:25.192 "name": "BaseBdev3", 00:10:25.192 "uuid": "5c3839e1-e97e-4e7d-b805-1e3ec19be37e", 00:10:25.192 "is_configured": true, 00:10:25.192 "data_offset": 0, 00:10:25.192 "data_size": 65536 00:10:25.192 } 00:10:25.192 ] 00:10:25.192 }' 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.192 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.763 03:13:08 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.763 [2024-10-09 03:13:08.865071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:25.763 BaseBdev1 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.763 [ 00:10:25.763 { 00:10:25.763 "name": "BaseBdev1", 00:10:25.763 "aliases": [ 00:10:25.763 "7d23069c-aaad-4e35-88e5-28385ab28d51" 00:10:25.763 ], 00:10:25.763 "product_name": "Malloc disk", 
00:10:25.763 "block_size": 512, 00:10:25.763 "num_blocks": 65536, 00:10:25.763 "uuid": "7d23069c-aaad-4e35-88e5-28385ab28d51", 00:10:25.763 "assigned_rate_limits": { 00:10:25.763 "rw_ios_per_sec": 0, 00:10:25.763 "rw_mbytes_per_sec": 0, 00:10:25.763 "r_mbytes_per_sec": 0, 00:10:25.763 "w_mbytes_per_sec": 0 00:10:25.763 }, 00:10:25.763 "claimed": true, 00:10:25.763 "claim_type": "exclusive_write", 00:10:25.763 "zoned": false, 00:10:25.763 "supported_io_types": { 00:10:25.763 "read": true, 00:10:25.763 "write": true, 00:10:25.763 "unmap": true, 00:10:25.763 "flush": true, 00:10:25.763 "reset": true, 00:10:25.763 "nvme_admin": false, 00:10:25.763 "nvme_io": false, 00:10:25.763 "nvme_io_md": false, 00:10:25.763 "write_zeroes": true, 00:10:25.763 "zcopy": true, 00:10:25.763 "get_zone_info": false, 00:10:25.763 "zone_management": false, 00:10:25.763 "zone_append": false, 00:10:25.763 "compare": false, 00:10:25.763 "compare_and_write": false, 00:10:25.763 "abort": true, 00:10:25.763 "seek_hole": false, 00:10:25.763 "seek_data": false, 00:10:25.763 "copy": true, 00:10:25.763 "nvme_iov_md": false 00:10:25.763 }, 00:10:25.763 "memory_domains": [ 00:10:25.763 { 00:10:25.763 "dma_device_id": "system", 00:10:25.763 "dma_device_type": 1 00:10:25.763 }, 00:10:25.763 { 00:10:25.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:25.763 "dma_device_type": 2 00:10:25.763 } 00:10:25.763 ], 00:10:25.763 "driver_specific": {} 00:10:25.763 } 00:10:25.763 ] 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:25.763 "name": "Existed_Raid", 00:10:25.763 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.763 "strip_size_kb": 0, 00:10:25.763 "state": "configuring", 00:10:25.763 "raid_level": "raid1", 00:10:25.763 "superblock": false, 00:10:25.763 "num_base_bdevs": 3, 00:10:25.763 "num_base_bdevs_discovered": 2, 00:10:25.763 "num_base_bdevs_operational": 3, 00:10:25.763 "base_bdevs_list": [ 00:10:25.763 { 00:10:25.763 "name": "BaseBdev1", 00:10:25.763 "uuid": 
"7d23069c-aaad-4e35-88e5-28385ab28d51", 00:10:25.763 "is_configured": true, 00:10:25.763 "data_offset": 0, 00:10:25.763 "data_size": 65536 00:10:25.763 }, 00:10:25.763 { 00:10:25.763 "name": null, 00:10:25.763 "uuid": "37b379e6-d1cc-462b-8b1a-751f3ecf04f2", 00:10:25.763 "is_configured": false, 00:10:25.763 "data_offset": 0, 00:10:25.763 "data_size": 65536 00:10:25.763 }, 00:10:25.763 { 00:10:25.763 "name": "BaseBdev3", 00:10:25.763 "uuid": "5c3839e1-e97e-4e7d-b805-1e3ec19be37e", 00:10:25.763 "is_configured": true, 00:10:25.763 "data_offset": 0, 00:10:25.763 "data_size": 65536 00:10:25.763 } 00:10:25.763 ] 00:10:25.763 }' 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:25.763 03:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.023 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.023 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:26.023 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.023 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.283 [2024-10-09 03:13:09.372350] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:26.283 03:13:09 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.283 "name": "Existed_Raid", 00:10:26.283 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:26.283 "strip_size_kb": 0, 00:10:26.283 "state": "configuring", 00:10:26.283 "raid_level": "raid1", 00:10:26.283 "superblock": false, 00:10:26.283 "num_base_bdevs": 3, 00:10:26.283 "num_base_bdevs_discovered": 1, 00:10:26.283 "num_base_bdevs_operational": 3, 00:10:26.283 "base_bdevs_list": [ 00:10:26.283 { 00:10:26.283 "name": "BaseBdev1", 00:10:26.283 "uuid": "7d23069c-aaad-4e35-88e5-28385ab28d51", 00:10:26.283 "is_configured": true, 00:10:26.283 "data_offset": 0, 00:10:26.283 "data_size": 65536 00:10:26.283 }, 00:10:26.283 { 00:10:26.283 "name": null, 00:10:26.283 "uuid": "37b379e6-d1cc-462b-8b1a-751f3ecf04f2", 00:10:26.283 "is_configured": false, 00:10:26.283 "data_offset": 0, 00:10:26.283 "data_size": 65536 00:10:26.283 }, 00:10:26.283 { 00:10:26.283 "name": null, 00:10:26.283 "uuid": "5c3839e1-e97e-4e7d-b805-1e3ec19be37e", 00:10:26.283 "is_configured": false, 00:10:26.283 "data_offset": 0, 00:10:26.283 "data_size": 65536 00:10:26.283 } 00:10:26.283 ] 00:10:26.283 }' 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.283 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.543 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:26.802 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.802 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.802 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.802 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.803 [2024-10-09 03:13:09.895474] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.803 "name": "Existed_Raid", 00:10:26.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:26.803 "strip_size_kb": 0, 00:10:26.803 "state": "configuring", 00:10:26.803 "raid_level": "raid1", 00:10:26.803 "superblock": false, 00:10:26.803 "num_base_bdevs": 3, 00:10:26.803 "num_base_bdevs_discovered": 2, 00:10:26.803 "num_base_bdevs_operational": 3, 00:10:26.803 "base_bdevs_list": [ 00:10:26.803 { 00:10:26.803 "name": "BaseBdev1", 00:10:26.803 "uuid": "7d23069c-aaad-4e35-88e5-28385ab28d51", 00:10:26.803 "is_configured": true, 00:10:26.803 "data_offset": 0, 00:10:26.803 "data_size": 65536 00:10:26.803 }, 00:10:26.803 { 00:10:26.803 "name": null, 00:10:26.803 "uuid": "37b379e6-d1cc-462b-8b1a-751f3ecf04f2", 00:10:26.803 "is_configured": false, 00:10:26.803 "data_offset": 0, 00:10:26.803 "data_size": 65536 00:10:26.803 }, 00:10:26.803 { 00:10:26.803 "name": "BaseBdev3", 00:10:26.803 "uuid": "5c3839e1-e97e-4e7d-b805-1e3ec19be37e", 00:10:26.803 "is_configured": true, 00:10:26.803 "data_offset": 0, 00:10:26.803 "data_size": 65536 00:10:26.803 } 00:10:26.803 ] 00:10:26.803 }' 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.803 03:13:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.063 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.063 03:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.063 03:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:10:27.063 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:27.063 03:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.323 [2024-10-09 03:13:10.398719] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.323 03:13:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.323 "name": "Existed_Raid", 00:10:27.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.323 "strip_size_kb": 0, 00:10:27.323 "state": "configuring", 00:10:27.323 "raid_level": "raid1", 00:10:27.323 "superblock": false, 00:10:27.323 "num_base_bdevs": 3, 00:10:27.323 "num_base_bdevs_discovered": 1, 00:10:27.323 "num_base_bdevs_operational": 3, 00:10:27.323 "base_bdevs_list": [ 00:10:27.323 { 00:10:27.323 "name": null, 00:10:27.323 "uuid": "7d23069c-aaad-4e35-88e5-28385ab28d51", 00:10:27.323 "is_configured": false, 00:10:27.323 "data_offset": 0, 00:10:27.323 "data_size": 65536 00:10:27.323 }, 00:10:27.323 { 00:10:27.323 "name": null, 00:10:27.323 "uuid": "37b379e6-d1cc-462b-8b1a-751f3ecf04f2", 00:10:27.323 "is_configured": false, 00:10:27.323 "data_offset": 0, 00:10:27.323 "data_size": 65536 00:10:27.323 }, 00:10:27.323 { 00:10:27.323 "name": "BaseBdev3", 00:10:27.323 "uuid": "5c3839e1-e97e-4e7d-b805-1e3ec19be37e", 00:10:27.323 "is_configured": true, 00:10:27.323 "data_offset": 0, 00:10:27.323 "data_size": 65536 00:10:27.323 } 00:10:27.323 ] 00:10:27.323 }' 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.323 03:13:10 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:27.899 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:27.899 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.899 03:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.899 03:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.900 03:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.900 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:27.900 03:13:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:27.900 03:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.900 03:13:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.900 [2024-10-09 03:13:11.006567] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:27.900 "name": "Existed_Raid", 00:10:27.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.900 "strip_size_kb": 0, 00:10:27.900 "state": "configuring", 00:10:27.900 "raid_level": "raid1", 00:10:27.900 "superblock": false, 00:10:27.900 "num_base_bdevs": 3, 00:10:27.900 "num_base_bdevs_discovered": 2, 00:10:27.900 "num_base_bdevs_operational": 3, 00:10:27.900 "base_bdevs_list": [ 00:10:27.900 { 00:10:27.900 "name": null, 00:10:27.900 "uuid": "7d23069c-aaad-4e35-88e5-28385ab28d51", 00:10:27.900 "is_configured": false, 00:10:27.900 "data_offset": 0, 00:10:27.900 "data_size": 65536 00:10:27.900 }, 00:10:27.900 { 00:10:27.900 "name": "BaseBdev2", 00:10:27.900 "uuid": "37b379e6-d1cc-462b-8b1a-751f3ecf04f2", 00:10:27.900 "is_configured": true, 00:10:27.900 "data_offset": 0, 00:10:27.900 "data_size": 65536 00:10:27.900 }, 00:10:27.900 { 
00:10:27.900 "name": "BaseBdev3", 00:10:27.900 "uuid": "5c3839e1-e97e-4e7d-b805-1e3ec19be37e", 00:10:27.900 "is_configured": true, 00:10:27.900 "data_offset": 0, 00:10:27.900 "data_size": 65536 00:10:27.900 } 00:10:27.900 ] 00:10:27.900 }' 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:27.900 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.159 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.159 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.159 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.159 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:28.159 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 7d23069c-aaad-4e35-88e5-28385ab28d51 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.419 03:13:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.419 [2024-10-09 03:13:11.589075] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:28.419 [2024-10-09 03:13:11.589220] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:28.419 [2024-10-09 03:13:11.589245] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:10:28.419 [2024-10-09 03:13:11.589556] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:28.419 [2024-10-09 03:13:11.589789] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:28.419 [2024-10-09 03:13:11.589836] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:28.419 [2024-10-09 03:13:11.590167] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:28.419 NewBaseBdev 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.419 [ 00:10:28.419 { 00:10:28.419 "name": "NewBaseBdev", 00:10:28.419 "aliases": [ 00:10:28.419 "7d23069c-aaad-4e35-88e5-28385ab28d51" 00:10:28.419 ], 00:10:28.419 "product_name": "Malloc disk", 00:10:28.419 "block_size": 512, 00:10:28.419 "num_blocks": 65536, 00:10:28.419 "uuid": "7d23069c-aaad-4e35-88e5-28385ab28d51", 00:10:28.419 "assigned_rate_limits": { 00:10:28.419 "rw_ios_per_sec": 0, 00:10:28.419 "rw_mbytes_per_sec": 0, 00:10:28.419 "r_mbytes_per_sec": 0, 00:10:28.419 "w_mbytes_per_sec": 0 00:10:28.419 }, 00:10:28.419 "claimed": true, 00:10:28.419 "claim_type": "exclusive_write", 00:10:28.419 "zoned": false, 00:10:28.419 "supported_io_types": { 00:10:28.419 "read": true, 00:10:28.419 "write": true, 00:10:28.419 "unmap": true, 00:10:28.419 "flush": true, 00:10:28.419 "reset": true, 00:10:28.419 "nvme_admin": false, 00:10:28.419 "nvme_io": false, 00:10:28.419 "nvme_io_md": false, 00:10:28.419 "write_zeroes": true, 00:10:28.419 "zcopy": true, 00:10:28.419 "get_zone_info": false, 00:10:28.419 "zone_management": false, 00:10:28.419 "zone_append": false, 00:10:28.419 "compare": false, 00:10:28.419 "compare_and_write": false, 00:10:28.419 "abort": true, 00:10:28.419 "seek_hole": false, 00:10:28.419 "seek_data": false, 00:10:28.419 "copy": true, 00:10:28.419 "nvme_iov_md": false 00:10:28.419 }, 00:10:28.419 "memory_domains": [ 00:10:28.419 { 00:10:28.419 
"dma_device_id": "system", 00:10:28.419 "dma_device_type": 1 00:10:28.419 }, 00:10:28.419 { 00:10:28.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.419 "dma_device_type": 2 00:10:28.419 } 00:10:28.419 ], 00:10:28.419 "driver_specific": {} 00:10:28.419 } 00:10:28.419 ] 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.419 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:28.420 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.420 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.420 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:28.420 "name": "Existed_Raid", 00:10:28.420 "uuid": "74b22905-14a6-4772-9c57-54ac4269790d", 00:10:28.420 "strip_size_kb": 0, 00:10:28.420 "state": "online", 00:10:28.420 "raid_level": "raid1", 00:10:28.420 "superblock": false, 00:10:28.420 "num_base_bdevs": 3, 00:10:28.420 "num_base_bdevs_discovered": 3, 00:10:28.420 "num_base_bdevs_operational": 3, 00:10:28.420 "base_bdevs_list": [ 00:10:28.420 { 00:10:28.420 "name": "NewBaseBdev", 00:10:28.420 "uuid": "7d23069c-aaad-4e35-88e5-28385ab28d51", 00:10:28.420 "is_configured": true, 00:10:28.420 "data_offset": 0, 00:10:28.420 "data_size": 65536 00:10:28.420 }, 00:10:28.420 { 00:10:28.420 "name": "BaseBdev2", 00:10:28.420 "uuid": "37b379e6-d1cc-462b-8b1a-751f3ecf04f2", 00:10:28.420 "is_configured": true, 00:10:28.420 "data_offset": 0, 00:10:28.420 "data_size": 65536 00:10:28.420 }, 00:10:28.420 { 00:10:28.420 "name": "BaseBdev3", 00:10:28.420 "uuid": "5c3839e1-e97e-4e7d-b805-1e3ec19be37e", 00:10:28.420 "is_configured": true, 00:10:28.420 "data_offset": 0, 00:10:28.420 "data_size": 65536 00:10:28.420 } 00:10:28.420 ] 00:10:28.420 }' 00:10:28.420 03:13:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:28.420 03:13:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:28.990 03:13:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.990 [2024-10-09 03:13:12.064707] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:28.990 "name": "Existed_Raid", 00:10:28.990 "aliases": [ 00:10:28.990 "74b22905-14a6-4772-9c57-54ac4269790d" 00:10:28.990 ], 00:10:28.990 "product_name": "Raid Volume", 00:10:28.990 "block_size": 512, 00:10:28.990 "num_blocks": 65536, 00:10:28.990 "uuid": "74b22905-14a6-4772-9c57-54ac4269790d", 00:10:28.990 "assigned_rate_limits": { 00:10:28.990 "rw_ios_per_sec": 0, 00:10:28.990 "rw_mbytes_per_sec": 0, 00:10:28.990 "r_mbytes_per_sec": 0, 00:10:28.990 "w_mbytes_per_sec": 0 00:10:28.990 }, 00:10:28.990 "claimed": false, 00:10:28.990 "zoned": false, 00:10:28.990 "supported_io_types": { 00:10:28.990 "read": true, 00:10:28.990 "write": true, 00:10:28.990 "unmap": false, 00:10:28.990 "flush": false, 00:10:28.990 "reset": true, 00:10:28.990 "nvme_admin": false, 00:10:28.990 "nvme_io": false, 00:10:28.990 "nvme_io_md": false, 00:10:28.990 "write_zeroes": true, 00:10:28.990 "zcopy": false, 00:10:28.990 
"get_zone_info": false, 00:10:28.990 "zone_management": false, 00:10:28.990 "zone_append": false, 00:10:28.990 "compare": false, 00:10:28.990 "compare_and_write": false, 00:10:28.990 "abort": false, 00:10:28.990 "seek_hole": false, 00:10:28.990 "seek_data": false, 00:10:28.990 "copy": false, 00:10:28.990 "nvme_iov_md": false 00:10:28.990 }, 00:10:28.990 "memory_domains": [ 00:10:28.990 { 00:10:28.990 "dma_device_id": "system", 00:10:28.990 "dma_device_type": 1 00:10:28.990 }, 00:10:28.990 { 00:10:28.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.990 "dma_device_type": 2 00:10:28.990 }, 00:10:28.990 { 00:10:28.990 "dma_device_id": "system", 00:10:28.990 "dma_device_type": 1 00:10:28.990 }, 00:10:28.990 { 00:10:28.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.990 "dma_device_type": 2 00:10:28.990 }, 00:10:28.990 { 00:10:28.990 "dma_device_id": "system", 00:10:28.990 "dma_device_type": 1 00:10:28.990 }, 00:10:28.990 { 00:10:28.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:28.990 "dma_device_type": 2 00:10:28.990 } 00:10:28.990 ], 00:10:28.990 "driver_specific": { 00:10:28.990 "raid": { 00:10:28.990 "uuid": "74b22905-14a6-4772-9c57-54ac4269790d", 00:10:28.990 "strip_size_kb": 0, 00:10:28.990 "state": "online", 00:10:28.990 "raid_level": "raid1", 00:10:28.990 "superblock": false, 00:10:28.990 "num_base_bdevs": 3, 00:10:28.990 "num_base_bdevs_discovered": 3, 00:10:28.990 "num_base_bdevs_operational": 3, 00:10:28.990 "base_bdevs_list": [ 00:10:28.990 { 00:10:28.990 "name": "NewBaseBdev", 00:10:28.990 "uuid": "7d23069c-aaad-4e35-88e5-28385ab28d51", 00:10:28.990 "is_configured": true, 00:10:28.990 "data_offset": 0, 00:10:28.990 "data_size": 65536 00:10:28.990 }, 00:10:28.990 { 00:10:28.990 "name": "BaseBdev2", 00:10:28.990 "uuid": "37b379e6-d1cc-462b-8b1a-751f3ecf04f2", 00:10:28.990 "is_configured": true, 00:10:28.990 "data_offset": 0, 00:10:28.990 "data_size": 65536 00:10:28.990 }, 00:10:28.990 { 00:10:28.990 "name": "BaseBdev3", 00:10:28.990 "uuid": 
"5c3839e1-e97e-4e7d-b805-1e3ec19be37e", 00:10:28.990 "is_configured": true, 00:10:28.990 "data_offset": 0, 00:10:28.990 "data_size": 65536 00:10:28.990 } 00:10:28.990 ] 00:10:28.990 } 00:10:28.990 } 00:10:28.990 }' 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:28.990 BaseBdev2 00:10:28.990 BaseBdev3' 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.990 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.991 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:28.991 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.250 
[2024-10-09 03:13:12.331899] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:29.250 [2024-10-09 03:13:12.331932] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:29.250 [2024-10-09 03:13:12.332007] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:29.250 [2024-10-09 03:13:12.332312] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:29.250 [2024-10-09 03:13:12.332322] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67521 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 67521 ']' 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 67521 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 67521 00:10:29.250 killing process with pid 67521 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 67521' 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 67521 00:10:29.250 [2024-10-09 
03:13:12.369494] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:29.250 03:13:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 67521 00:10:29.510 [2024-10-09 03:13:12.698440] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:30.889 03:13:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:30.889 00:10:30.889 real 0m10.808s 00:10:30.889 user 0m16.779s 00:10:30.889 sys 0m2.025s 00:10:30.889 ************************************ 00:10:30.889 END TEST raid_state_function_test 00:10:30.889 ************************************ 00:10:30.889 03:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:30.889 03:13:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.889 03:13:14 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:10:30.889 03:13:14 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:30.889 03:13:14 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:30.889 03:13:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:30.889 ************************************ 00:10:30.889 START TEST raid_state_function_test_sb 00:10:30.889 ************************************ 00:10:30.889 03:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 3 true 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:30.890 03:13:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:10:30.890 
03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=68148 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 68148' 00:10:30.890 Process raid pid: 68148 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 68148 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 68148 ']' 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:30.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:30.890 03:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.148 [2024-10-09 03:13:14.240019] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:10:31.148 [2024-10-09 03:13:14.240217] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:31.148 [2024-10-09 03:13:14.404989] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:31.407 [2024-10-09 03:13:14.662942] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:31.667 [2024-10-09 03:13:14.916001] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.667 [2024-10-09 03:13:14.916152] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.926 [2024-10-09 03:13:15.085761] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:31.926 [2024-10-09 03:13:15.085836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:31.926 [2024-10-09 03:13:15.085869] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:31.926 [2024-10-09 03:13:15.085882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:31.926 [2024-10-09 03:13:15.085889] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:10:31.926 [2024-10-09 03:13:15.085899] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:31.926 "name": "Existed_Raid", 00:10:31.926 "uuid": "5eaf5052-2f9c-4f05-8d74-34f21931e7d4", 00:10:31.926 "strip_size_kb": 0, 00:10:31.926 "state": "configuring", 00:10:31.926 "raid_level": "raid1", 00:10:31.926 "superblock": true, 00:10:31.926 "num_base_bdevs": 3, 00:10:31.926 "num_base_bdevs_discovered": 0, 00:10:31.926 "num_base_bdevs_operational": 3, 00:10:31.926 "base_bdevs_list": [ 00:10:31.926 { 00:10:31.926 "name": "BaseBdev1", 00:10:31.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.926 "is_configured": false, 00:10:31.926 "data_offset": 0, 00:10:31.926 "data_size": 0 00:10:31.926 }, 00:10:31.926 { 00:10:31.926 "name": "BaseBdev2", 00:10:31.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.926 "is_configured": false, 00:10:31.926 "data_offset": 0, 00:10:31.926 "data_size": 0 00:10:31.926 }, 00:10:31.926 { 00:10:31.926 "name": "BaseBdev3", 00:10:31.926 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:31.926 "is_configured": false, 00:10:31.926 "data_offset": 0, 00:10:31.926 "data_size": 0 00:10:31.926 } 00:10:31.926 ] 00:10:31.926 }' 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:31.926 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.495 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:32.495 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.495 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.495 [2024-10-09 03:13:15.537055] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:32.495 [2024-10-09 03:13:15.537201] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.496 [2024-10-09 03:13:15.549094] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:32.496 [2024-10-09 03:13:15.549246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:32.496 [2024-10-09 03:13:15.549276] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:32.496 [2024-10-09 03:13:15.549300] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:32.496 [2024-10-09 03:13:15.549319] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:32.496 [2024-10-09 03:13:15.549342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.496 [2024-10-09 03:13:15.613337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:32.496 BaseBdev1 
00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.496 [ 00:10:32.496 { 00:10:32.496 "name": "BaseBdev1", 00:10:32.496 "aliases": [ 00:10:32.496 "a164ea2c-2469-4731-a322-f338f5704c3c" 00:10:32.496 ], 00:10:32.496 "product_name": "Malloc disk", 00:10:32.496 "block_size": 512, 00:10:32.496 "num_blocks": 65536, 00:10:32.496 "uuid": "a164ea2c-2469-4731-a322-f338f5704c3c", 00:10:32.496 "assigned_rate_limits": { 00:10:32.496 
"rw_ios_per_sec": 0, 00:10:32.496 "rw_mbytes_per_sec": 0, 00:10:32.496 "r_mbytes_per_sec": 0, 00:10:32.496 "w_mbytes_per_sec": 0 00:10:32.496 }, 00:10:32.496 "claimed": true, 00:10:32.496 "claim_type": "exclusive_write", 00:10:32.496 "zoned": false, 00:10:32.496 "supported_io_types": { 00:10:32.496 "read": true, 00:10:32.496 "write": true, 00:10:32.496 "unmap": true, 00:10:32.496 "flush": true, 00:10:32.496 "reset": true, 00:10:32.496 "nvme_admin": false, 00:10:32.496 "nvme_io": false, 00:10:32.496 "nvme_io_md": false, 00:10:32.496 "write_zeroes": true, 00:10:32.496 "zcopy": true, 00:10:32.496 "get_zone_info": false, 00:10:32.496 "zone_management": false, 00:10:32.496 "zone_append": false, 00:10:32.496 "compare": false, 00:10:32.496 "compare_and_write": false, 00:10:32.496 "abort": true, 00:10:32.496 "seek_hole": false, 00:10:32.496 "seek_data": false, 00:10:32.496 "copy": true, 00:10:32.496 "nvme_iov_md": false 00:10:32.496 }, 00:10:32.496 "memory_domains": [ 00:10:32.496 { 00:10:32.496 "dma_device_id": "system", 00:10:32.496 "dma_device_type": 1 00:10:32.496 }, 00:10:32.496 { 00:10:32.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.496 "dma_device_type": 2 00:10:32.496 } 00:10:32.496 ], 00:10:32.496 "driver_specific": {} 00:10:32.496 } 00:10:32.496 ] 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:32.496 "name": "Existed_Raid", 00:10:32.496 "uuid": "61bddf2f-ebc2-441f-b854-ea5217b60cba", 00:10:32.496 "strip_size_kb": 0, 00:10:32.496 "state": "configuring", 00:10:32.496 "raid_level": "raid1", 00:10:32.496 "superblock": true, 00:10:32.496 "num_base_bdevs": 3, 00:10:32.496 "num_base_bdevs_discovered": 1, 00:10:32.496 "num_base_bdevs_operational": 3, 00:10:32.496 "base_bdevs_list": [ 00:10:32.496 { 00:10:32.496 "name": "BaseBdev1", 00:10:32.496 "uuid": "a164ea2c-2469-4731-a322-f338f5704c3c", 00:10:32.496 "is_configured": true, 00:10:32.496 "data_offset": 2048, 00:10:32.496 "data_size": 63488 
00:10:32.496 }, 00:10:32.496 { 00:10:32.496 "name": "BaseBdev2", 00:10:32.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.496 "is_configured": false, 00:10:32.496 "data_offset": 0, 00:10:32.496 "data_size": 0 00:10:32.496 }, 00:10:32.496 { 00:10:32.496 "name": "BaseBdev3", 00:10:32.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.496 "is_configured": false, 00:10:32.496 "data_offset": 0, 00:10:32.496 "data_size": 0 00:10:32.496 } 00:10:32.496 ] 00:10:32.496 }' 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:32.496 03:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.072 [2024-10-09 03:13:16.152779] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:33.072 [2024-10-09 03:13:16.152945] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.072 [2024-10-09 03:13:16.164816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:33.072 [2024-10-09 03:13:16.167223] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:33.072 [2024-10-09 03:13:16.167316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:33.072 [2024-10-09 03:13:16.167352] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:33.072 [2024-10-09 03:13:16.167376] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.072 "name": "Existed_Raid", 00:10:33.072 "uuid": "47c1ab5d-f845-486c-ac1e-006f2e4237bf", 00:10:33.072 "strip_size_kb": 0, 00:10:33.072 "state": "configuring", 00:10:33.072 "raid_level": "raid1", 00:10:33.072 "superblock": true, 00:10:33.072 "num_base_bdevs": 3, 00:10:33.072 "num_base_bdevs_discovered": 1, 00:10:33.072 "num_base_bdevs_operational": 3, 00:10:33.072 "base_bdevs_list": [ 00:10:33.072 { 00:10:33.072 "name": "BaseBdev1", 00:10:33.072 "uuid": "a164ea2c-2469-4731-a322-f338f5704c3c", 00:10:33.072 "is_configured": true, 00:10:33.072 "data_offset": 2048, 00:10:33.072 "data_size": 63488 00:10:33.072 }, 00:10:33.072 { 00:10:33.072 "name": "BaseBdev2", 00:10:33.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.072 "is_configured": false, 00:10:33.072 "data_offset": 0, 00:10:33.072 "data_size": 0 00:10:33.072 }, 00:10:33.072 { 00:10:33.072 "name": "BaseBdev3", 00:10:33.072 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.072 "is_configured": false, 00:10:33.072 "data_offset": 0, 00:10:33.072 "data_size": 0 00:10:33.072 } 00:10:33.072 ] 00:10:33.072 }' 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.072 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:33.332 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:33.332 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.332 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.591 [2024-10-09 03:13:16.676430] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:33.591 BaseBdev2 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.591 [ 00:10:33.591 { 00:10:33.591 "name": "BaseBdev2", 00:10:33.591 "aliases": [ 00:10:33.591 "436cae8b-5933-45f4-b3f3-75ff0e224750" 00:10:33.591 ], 00:10:33.591 "product_name": "Malloc disk", 00:10:33.591 "block_size": 512, 00:10:33.591 "num_blocks": 65536, 00:10:33.591 "uuid": "436cae8b-5933-45f4-b3f3-75ff0e224750", 00:10:33.591 "assigned_rate_limits": { 00:10:33.591 "rw_ios_per_sec": 0, 00:10:33.591 "rw_mbytes_per_sec": 0, 00:10:33.591 "r_mbytes_per_sec": 0, 00:10:33.591 "w_mbytes_per_sec": 0 00:10:33.591 }, 00:10:33.591 "claimed": true, 00:10:33.591 "claim_type": "exclusive_write", 00:10:33.591 "zoned": false, 00:10:33.591 "supported_io_types": { 00:10:33.591 "read": true, 00:10:33.591 "write": true, 00:10:33.591 "unmap": true, 00:10:33.591 "flush": true, 00:10:33.591 "reset": true, 00:10:33.591 "nvme_admin": false, 00:10:33.591 "nvme_io": false, 00:10:33.591 "nvme_io_md": false, 00:10:33.591 "write_zeroes": true, 00:10:33.591 "zcopy": true, 00:10:33.591 "get_zone_info": false, 00:10:33.591 "zone_management": false, 00:10:33.591 "zone_append": false, 00:10:33.591 "compare": false, 00:10:33.591 "compare_and_write": false, 00:10:33.591 "abort": true, 00:10:33.591 "seek_hole": false, 00:10:33.591 "seek_data": false, 00:10:33.591 "copy": true, 00:10:33.591 "nvme_iov_md": false 00:10:33.591 }, 00:10:33.591 "memory_domains": [ 00:10:33.591 { 00:10:33.591 "dma_device_id": "system", 00:10:33.591 "dma_device_type": 1 00:10:33.591 }, 00:10:33.591 { 00:10:33.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:33.591 "dma_device_type": 2 00:10:33.591 } 00:10:33.591 ], 00:10:33.591 "driver_specific": {} 00:10:33.591 } 00:10:33.591 ] 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:33.591 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:33.591 
03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:33.591 "name": "Existed_Raid", 00:10:33.591 "uuid": "47c1ab5d-f845-486c-ac1e-006f2e4237bf", 00:10:33.591 "strip_size_kb": 0, 00:10:33.591 "state": "configuring", 00:10:33.591 "raid_level": "raid1", 00:10:33.591 "superblock": true, 00:10:33.591 "num_base_bdevs": 3, 00:10:33.591 "num_base_bdevs_discovered": 2, 00:10:33.591 "num_base_bdevs_operational": 3, 00:10:33.591 "base_bdevs_list": [ 00:10:33.592 { 00:10:33.592 "name": "BaseBdev1", 00:10:33.592 "uuid": "a164ea2c-2469-4731-a322-f338f5704c3c", 00:10:33.592 "is_configured": true, 00:10:33.592 "data_offset": 2048, 00:10:33.592 "data_size": 63488 00:10:33.592 }, 00:10:33.592 { 00:10:33.592 "name": "BaseBdev2", 00:10:33.592 "uuid": "436cae8b-5933-45f4-b3f3-75ff0e224750", 00:10:33.592 "is_configured": true, 00:10:33.592 "data_offset": 2048, 00:10:33.592 "data_size": 63488 00:10:33.592 }, 00:10:33.592 { 00:10:33.592 "name": "BaseBdev3", 00:10:33.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:33.592 "is_configured": false, 00:10:33.592 "data_offset": 0, 00:10:33.592 "data_size": 0 00:10:33.592 } 00:10:33.592 ] 00:10:33.592 }' 00:10:33.592 03:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:33.592 03:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.159 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.160 [2024-10-09 03:13:17.236297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:34.160 [2024-10-09 03:13:17.236668] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:10:34.160 [2024-10-09 03:13:17.236730] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:34.160 BaseBdev3 00:10:34.160 [2024-10-09 03:13:17.237268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:34.160 [2024-10-09 03:13:17.237444] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:34.160 [2024-10-09 03:13:17.237506] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:34.160 [2024-10-09 03:13:17.237702] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.160 03:13:17 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.160 [ 00:10:34.160 { 00:10:34.160 "name": "BaseBdev3", 00:10:34.160 "aliases": [ 00:10:34.160 "48fe9897-4a57-4f72-b6ea-4b9bf054b8f6" 00:10:34.160 ], 00:10:34.160 "product_name": "Malloc disk", 00:10:34.160 "block_size": 512, 00:10:34.160 "num_blocks": 65536, 00:10:34.160 "uuid": "48fe9897-4a57-4f72-b6ea-4b9bf054b8f6", 00:10:34.160 "assigned_rate_limits": { 00:10:34.160 "rw_ios_per_sec": 0, 00:10:34.160 "rw_mbytes_per_sec": 0, 00:10:34.160 "r_mbytes_per_sec": 0, 00:10:34.160 "w_mbytes_per_sec": 0 00:10:34.160 }, 00:10:34.160 "claimed": true, 00:10:34.160 "claim_type": "exclusive_write", 00:10:34.160 "zoned": false, 00:10:34.160 "supported_io_types": { 00:10:34.160 "read": true, 00:10:34.160 "write": true, 00:10:34.160 "unmap": true, 00:10:34.160 "flush": true, 00:10:34.160 "reset": true, 00:10:34.160 "nvme_admin": false, 00:10:34.160 "nvme_io": false, 00:10:34.160 "nvme_io_md": false, 00:10:34.160 "write_zeroes": true, 00:10:34.160 "zcopy": true, 00:10:34.160 "get_zone_info": false, 00:10:34.160 "zone_management": false, 00:10:34.160 "zone_append": false, 00:10:34.160 "compare": false, 00:10:34.160 "compare_and_write": false, 00:10:34.160 "abort": true, 00:10:34.160 "seek_hole": false, 00:10:34.160 "seek_data": false, 00:10:34.160 "copy": true, 00:10:34.160 "nvme_iov_md": false 00:10:34.160 }, 00:10:34.160 "memory_domains": [ 00:10:34.160 { 00:10:34.160 "dma_device_id": "system", 00:10:34.160 "dma_device_type": 1 00:10:34.160 }, 00:10:34.160 { 00:10:34.160 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.160 "dma_device_type": 2 00:10:34.160 } 00:10:34.160 ], 00:10:34.160 "driver_specific": {} 00:10:34.160 } 00:10:34.160 ] 
00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.160 03:13:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.160 "name": "Existed_Raid", 00:10:34.160 "uuid": "47c1ab5d-f845-486c-ac1e-006f2e4237bf", 00:10:34.160 "strip_size_kb": 0, 00:10:34.160 "state": "online", 00:10:34.160 "raid_level": "raid1", 00:10:34.160 "superblock": true, 00:10:34.160 "num_base_bdevs": 3, 00:10:34.160 "num_base_bdevs_discovered": 3, 00:10:34.160 "num_base_bdevs_operational": 3, 00:10:34.160 "base_bdevs_list": [ 00:10:34.160 { 00:10:34.160 "name": "BaseBdev1", 00:10:34.160 "uuid": "a164ea2c-2469-4731-a322-f338f5704c3c", 00:10:34.160 "is_configured": true, 00:10:34.160 "data_offset": 2048, 00:10:34.160 "data_size": 63488 00:10:34.160 }, 00:10:34.160 { 00:10:34.160 "name": "BaseBdev2", 00:10:34.160 "uuid": "436cae8b-5933-45f4-b3f3-75ff0e224750", 00:10:34.160 "is_configured": true, 00:10:34.160 "data_offset": 2048, 00:10:34.160 "data_size": 63488 00:10:34.160 }, 00:10:34.160 { 00:10:34.160 "name": "BaseBdev3", 00:10:34.160 "uuid": "48fe9897-4a57-4f72-b6ea-4b9bf054b8f6", 00:10:34.160 "is_configured": true, 00:10:34.160 "data_offset": 2048, 00:10:34.160 "data_size": 63488 00:10:34.160 } 00:10:34.160 ] 00:10:34.160 }' 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.160 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.427 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:34.427 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:34.427 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:34.427 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:34.427 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:34.427 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:34.427 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:34.427 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:34.427 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.427 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.427 [2024-10-09 03:13:17.719810] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:34.703 "name": "Existed_Raid", 00:10:34.703 "aliases": [ 00:10:34.703 "47c1ab5d-f845-486c-ac1e-006f2e4237bf" 00:10:34.703 ], 00:10:34.703 "product_name": "Raid Volume", 00:10:34.703 "block_size": 512, 00:10:34.703 "num_blocks": 63488, 00:10:34.703 "uuid": "47c1ab5d-f845-486c-ac1e-006f2e4237bf", 00:10:34.703 "assigned_rate_limits": { 00:10:34.703 "rw_ios_per_sec": 0, 00:10:34.703 "rw_mbytes_per_sec": 0, 00:10:34.703 "r_mbytes_per_sec": 0, 00:10:34.703 "w_mbytes_per_sec": 0 00:10:34.703 }, 00:10:34.703 "claimed": false, 00:10:34.703 "zoned": false, 00:10:34.703 "supported_io_types": { 00:10:34.703 "read": true, 00:10:34.703 "write": true, 00:10:34.703 "unmap": false, 00:10:34.703 "flush": false, 00:10:34.703 "reset": true, 00:10:34.703 "nvme_admin": false, 00:10:34.703 "nvme_io": false, 00:10:34.703 "nvme_io_md": false, 00:10:34.703 
"write_zeroes": true, 00:10:34.703 "zcopy": false, 00:10:34.703 "get_zone_info": false, 00:10:34.703 "zone_management": false, 00:10:34.703 "zone_append": false, 00:10:34.703 "compare": false, 00:10:34.703 "compare_and_write": false, 00:10:34.703 "abort": false, 00:10:34.703 "seek_hole": false, 00:10:34.703 "seek_data": false, 00:10:34.703 "copy": false, 00:10:34.703 "nvme_iov_md": false 00:10:34.703 }, 00:10:34.703 "memory_domains": [ 00:10:34.703 { 00:10:34.703 "dma_device_id": "system", 00:10:34.703 "dma_device_type": 1 00:10:34.703 }, 00:10:34.703 { 00:10:34.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.703 "dma_device_type": 2 00:10:34.703 }, 00:10:34.703 { 00:10:34.703 "dma_device_id": "system", 00:10:34.703 "dma_device_type": 1 00:10:34.703 }, 00:10:34.703 { 00:10:34.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.703 "dma_device_type": 2 00:10:34.703 }, 00:10:34.703 { 00:10:34.703 "dma_device_id": "system", 00:10:34.703 "dma_device_type": 1 00:10:34.703 }, 00:10:34.703 { 00:10:34.703 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:34.703 "dma_device_type": 2 00:10:34.703 } 00:10:34.703 ], 00:10:34.703 "driver_specific": { 00:10:34.703 "raid": { 00:10:34.703 "uuid": "47c1ab5d-f845-486c-ac1e-006f2e4237bf", 00:10:34.703 "strip_size_kb": 0, 00:10:34.703 "state": "online", 00:10:34.703 "raid_level": "raid1", 00:10:34.703 "superblock": true, 00:10:34.703 "num_base_bdevs": 3, 00:10:34.703 "num_base_bdevs_discovered": 3, 00:10:34.703 "num_base_bdevs_operational": 3, 00:10:34.703 "base_bdevs_list": [ 00:10:34.703 { 00:10:34.703 "name": "BaseBdev1", 00:10:34.703 "uuid": "a164ea2c-2469-4731-a322-f338f5704c3c", 00:10:34.703 "is_configured": true, 00:10:34.703 "data_offset": 2048, 00:10:34.703 "data_size": 63488 00:10:34.703 }, 00:10:34.703 { 00:10:34.703 "name": "BaseBdev2", 00:10:34.703 "uuid": "436cae8b-5933-45f4-b3f3-75ff0e224750", 00:10:34.703 "is_configured": true, 00:10:34.703 "data_offset": 2048, 00:10:34.703 "data_size": 63488 00:10:34.703 }, 
00:10:34.703 { 00:10:34.703 "name": "BaseBdev3", 00:10:34.703 "uuid": "48fe9897-4a57-4f72-b6ea-4b9bf054b8f6", 00:10:34.703 "is_configured": true, 00:10:34.703 "data_offset": 2048, 00:10:34.703 "data_size": 63488 00:10:34.703 } 00:10:34.703 ] 00:10:34.703 } 00:10:34.703 } 00:10:34.703 }' 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:34.703 BaseBdev2 00:10:34.703 BaseBdev3' 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.703 
03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.703 03:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.703 [2024-10-09 03:13:17.991051] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.964 
03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.964 "name": "Existed_Raid", 00:10:34.964 "uuid": "47c1ab5d-f845-486c-ac1e-006f2e4237bf", 00:10:34.964 "strip_size_kb": 0, 00:10:34.964 "state": "online", 00:10:34.964 "raid_level": "raid1", 00:10:34.964 "superblock": true, 00:10:34.964 "num_base_bdevs": 3, 00:10:34.964 "num_base_bdevs_discovered": 2, 00:10:34.964 "num_base_bdevs_operational": 2, 00:10:34.964 "base_bdevs_list": [ 00:10:34.964 { 00:10:34.964 "name": null, 00:10:34.964 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.964 "is_configured": false, 00:10:34.964 "data_offset": 0, 00:10:34.964 "data_size": 63488 00:10:34.964 }, 00:10:34.964 { 00:10:34.964 "name": "BaseBdev2", 00:10:34.964 "uuid": "436cae8b-5933-45f4-b3f3-75ff0e224750", 00:10:34.964 "is_configured": true, 00:10:34.964 "data_offset": 2048, 00:10:34.964 "data_size": 63488 00:10:34.964 }, 00:10:34.964 { 00:10:34.964 "name": "BaseBdev3", 00:10:34.964 "uuid": "48fe9897-4a57-4f72-b6ea-4b9bf054b8f6", 00:10:34.964 "is_configured": true, 00:10:34.964 "data_offset": 2048, 00:10:34.964 "data_size": 63488 00:10:34.964 } 00:10:34.964 ] 00:10:34.964 }' 00:10:34.964 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.964 
03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.223 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:35.223 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.223 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.223 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.223 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.223 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.483 [2024-10-09 03:13:18.538729] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.483 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.483 [2024-10-09 03:13:18.699857] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:35.483 [2024-10-09 03:13:18.700068] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:35.744 [2024-10-09 03:13:18.805952] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:35.744 [2024-10-09 03:13:18.806080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:35.744 [2024-10-09 03:13:18.806124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.744 BaseBdev2 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.744 [ 00:10:35.744 { 00:10:35.744 "name": "BaseBdev2", 00:10:35.744 "aliases": [ 00:10:35.744 "9cdb805f-d94e-4f56-9bf5-03d912699f98" 00:10:35.744 ], 00:10:35.744 "product_name": "Malloc disk", 00:10:35.744 "block_size": 512, 00:10:35.744 "num_blocks": 65536, 00:10:35.744 "uuid": "9cdb805f-d94e-4f56-9bf5-03d912699f98", 00:10:35.744 "assigned_rate_limits": { 00:10:35.744 "rw_ios_per_sec": 0, 00:10:35.744 "rw_mbytes_per_sec": 0, 00:10:35.744 "r_mbytes_per_sec": 0, 00:10:35.744 "w_mbytes_per_sec": 0 00:10:35.744 }, 00:10:35.744 "claimed": false, 00:10:35.744 "zoned": false, 00:10:35.744 "supported_io_types": { 00:10:35.744 "read": true, 00:10:35.744 "write": true, 00:10:35.744 "unmap": true, 00:10:35.744 "flush": true, 00:10:35.744 "reset": true, 00:10:35.744 "nvme_admin": false, 00:10:35.744 "nvme_io": false, 00:10:35.744 
"nvme_io_md": false, 00:10:35.744 "write_zeroes": true, 00:10:35.744 "zcopy": true, 00:10:35.744 "get_zone_info": false, 00:10:35.744 "zone_management": false, 00:10:35.744 "zone_append": false, 00:10:35.744 "compare": false, 00:10:35.744 "compare_and_write": false, 00:10:35.744 "abort": true, 00:10:35.744 "seek_hole": false, 00:10:35.744 "seek_data": false, 00:10:35.744 "copy": true, 00:10:35.744 "nvme_iov_md": false 00:10:35.744 }, 00:10:35.744 "memory_domains": [ 00:10:35.744 { 00:10:35.744 "dma_device_id": "system", 00:10:35.744 "dma_device_type": 1 00:10:35.744 }, 00:10:35.744 { 00:10:35.744 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.744 "dma_device_type": 2 00:10:35.744 } 00:10:35.744 ], 00:10:35.744 "driver_specific": {} 00:10:35.744 } 00:10:35.744 ] 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.744 BaseBdev3 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:35.744 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:35.745 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:35.745 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:35.745 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.745 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.745 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.745 03:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.745 [ 00:10:35.745 { 00:10:35.745 "name": "BaseBdev3", 00:10:35.745 "aliases": [ 00:10:35.745 "54653778-4a23-4320-8fdf-d0636bff6f2e" 00:10:35.745 ], 00:10:35.745 "product_name": "Malloc disk", 00:10:35.745 "block_size": 512, 00:10:35.745 "num_blocks": 65536, 00:10:35.745 "uuid": "54653778-4a23-4320-8fdf-d0636bff6f2e", 00:10:35.745 "assigned_rate_limits": { 00:10:35.745 "rw_ios_per_sec": 0, 00:10:35.745 "rw_mbytes_per_sec": 0, 00:10:35.745 "r_mbytes_per_sec": 0, 00:10:35.745 "w_mbytes_per_sec": 0 00:10:35.745 }, 00:10:35.745 "claimed": false, 00:10:35.745 "zoned": false, 00:10:35.745 "supported_io_types": { 00:10:35.745 "read": true, 00:10:35.745 "write": true, 00:10:35.745 "unmap": true, 00:10:35.745 "flush": true, 00:10:35.745 "reset": true, 00:10:35.745 "nvme_admin": false, 
00:10:35.745 "nvme_io": false, 00:10:35.745 "nvme_io_md": false, 00:10:35.745 "write_zeroes": true, 00:10:35.745 "zcopy": true, 00:10:35.745 "get_zone_info": false, 00:10:35.745 "zone_management": false, 00:10:35.745 "zone_append": false, 00:10:35.745 "compare": false, 00:10:35.745 "compare_and_write": false, 00:10:35.745 "abort": true, 00:10:35.745 "seek_hole": false, 00:10:35.745 "seek_data": false, 00:10:35.745 "copy": true, 00:10:35.745 "nvme_iov_md": false 00:10:35.745 }, 00:10:35.745 "memory_domains": [ 00:10:35.745 { 00:10:35.745 "dma_device_id": "system", 00:10:35.745 "dma_device_type": 1 00:10:35.745 }, 00:10:35.745 { 00:10:35.745 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.745 "dma_device_type": 2 00:10:35.745 } 00:10:35.745 ], 00:10:35.745 "driver_specific": {} 00:10:35.745 } 00:10:35.745 ] 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:35.745 [2024-10-09 03:13:19.031600] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:35.745 [2024-10-09 03:13:19.031714] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:35.745 [2024-10-09 03:13:19.031757] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.745 [2024-10-09 03:13:19.033961] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:35.745 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.006 
03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.006 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.006 "name": "Existed_Raid", 00:10:36.006 "uuid": "c8abf135-77b0-4d15-96cf-8036b76d2353", 00:10:36.006 "strip_size_kb": 0, 00:10:36.006 "state": "configuring", 00:10:36.006 "raid_level": "raid1", 00:10:36.006 "superblock": true, 00:10:36.006 "num_base_bdevs": 3, 00:10:36.006 "num_base_bdevs_discovered": 2, 00:10:36.006 "num_base_bdevs_operational": 3, 00:10:36.006 "base_bdevs_list": [ 00:10:36.006 { 00:10:36.006 "name": "BaseBdev1", 00:10:36.006 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.006 "is_configured": false, 00:10:36.006 "data_offset": 0, 00:10:36.006 "data_size": 0 00:10:36.006 }, 00:10:36.006 { 00:10:36.006 "name": "BaseBdev2", 00:10:36.006 "uuid": "9cdb805f-d94e-4f56-9bf5-03d912699f98", 00:10:36.006 "is_configured": true, 00:10:36.006 "data_offset": 2048, 00:10:36.006 "data_size": 63488 00:10:36.006 }, 00:10:36.006 { 00:10:36.006 "name": "BaseBdev3", 00:10:36.006 "uuid": "54653778-4a23-4320-8fdf-d0636bff6f2e", 00:10:36.006 "is_configured": true, 00:10:36.006 "data_offset": 2048, 00:10:36.006 "data_size": 63488 00:10:36.006 } 00:10:36.006 ] 00:10:36.006 }' 00:10:36.006 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.006 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.266 [2024-10-09 03:13:19.450967] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.266 03:13:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.266 "name": 
"Existed_Raid", 00:10:36.266 "uuid": "c8abf135-77b0-4d15-96cf-8036b76d2353", 00:10:36.266 "strip_size_kb": 0, 00:10:36.266 "state": "configuring", 00:10:36.266 "raid_level": "raid1", 00:10:36.266 "superblock": true, 00:10:36.266 "num_base_bdevs": 3, 00:10:36.266 "num_base_bdevs_discovered": 1, 00:10:36.266 "num_base_bdevs_operational": 3, 00:10:36.266 "base_bdevs_list": [ 00:10:36.266 { 00:10:36.266 "name": "BaseBdev1", 00:10:36.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.266 "is_configured": false, 00:10:36.266 "data_offset": 0, 00:10:36.266 "data_size": 0 00:10:36.266 }, 00:10:36.266 { 00:10:36.266 "name": null, 00:10:36.266 "uuid": "9cdb805f-d94e-4f56-9bf5-03d912699f98", 00:10:36.266 "is_configured": false, 00:10:36.266 "data_offset": 0, 00:10:36.266 "data_size": 63488 00:10:36.266 }, 00:10:36.266 { 00:10:36.266 "name": "BaseBdev3", 00:10:36.266 "uuid": "54653778-4a23-4320-8fdf-d0636bff6f2e", 00:10:36.266 "is_configured": true, 00:10:36.266 "data_offset": 2048, 00:10:36.266 "data_size": 63488 00:10:36.266 } 00:10:36.266 ] 00:10:36.266 }' 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.266 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:36.836 
03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.836 [2024-10-09 03:13:19.948464] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:36.836 BaseBdev1 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.836 [ 00:10:36.836 { 00:10:36.836 "name": "BaseBdev1", 00:10:36.836 "aliases": [ 00:10:36.836 "4774b70b-9d85-464b-ba7d-3fb19e1fe2fe" 00:10:36.836 ], 00:10:36.836 "product_name": "Malloc disk", 00:10:36.836 "block_size": 512, 00:10:36.836 "num_blocks": 65536, 00:10:36.836 "uuid": "4774b70b-9d85-464b-ba7d-3fb19e1fe2fe", 00:10:36.836 "assigned_rate_limits": { 00:10:36.836 "rw_ios_per_sec": 0, 00:10:36.836 "rw_mbytes_per_sec": 0, 00:10:36.836 "r_mbytes_per_sec": 0, 00:10:36.836 "w_mbytes_per_sec": 0 00:10:36.836 }, 00:10:36.836 "claimed": true, 00:10:36.836 "claim_type": "exclusive_write", 00:10:36.836 "zoned": false, 00:10:36.836 "supported_io_types": { 00:10:36.836 "read": true, 00:10:36.836 "write": true, 00:10:36.836 "unmap": true, 00:10:36.836 "flush": true, 00:10:36.836 "reset": true, 00:10:36.836 "nvme_admin": false, 00:10:36.836 "nvme_io": false, 00:10:36.836 "nvme_io_md": false, 00:10:36.836 "write_zeroes": true, 00:10:36.836 "zcopy": true, 00:10:36.836 "get_zone_info": false, 00:10:36.836 "zone_management": false, 00:10:36.836 "zone_append": false, 00:10:36.836 "compare": false, 00:10:36.836 "compare_and_write": false, 00:10:36.836 "abort": true, 00:10:36.836 "seek_hole": false, 00:10:36.836 "seek_data": false, 00:10:36.836 "copy": true, 00:10:36.836 "nvme_iov_md": false 00:10:36.836 }, 00:10:36.836 "memory_domains": [ 00:10:36.836 { 00:10:36.836 "dma_device_id": "system", 00:10:36.836 "dma_device_type": 1 00:10:36.836 }, 00:10:36.836 { 00:10:36.836 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.836 "dma_device_type": 2 00:10:36.836 } 00:10:36.836 ], 00:10:36.836 "driver_specific": {} 00:10:36.836 } 00:10:36.836 ] 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:36.836 
03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:36.836 03:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:36.836 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:36.837 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.837 "name": "Existed_Raid", 00:10:36.837 "uuid": "c8abf135-77b0-4d15-96cf-8036b76d2353", 00:10:36.837 "strip_size_kb": 0, 
00:10:36.837 "state": "configuring", 00:10:36.837 "raid_level": "raid1", 00:10:36.837 "superblock": true, 00:10:36.837 "num_base_bdevs": 3, 00:10:36.837 "num_base_bdevs_discovered": 2, 00:10:36.837 "num_base_bdevs_operational": 3, 00:10:36.837 "base_bdevs_list": [ 00:10:36.837 { 00:10:36.837 "name": "BaseBdev1", 00:10:36.837 "uuid": "4774b70b-9d85-464b-ba7d-3fb19e1fe2fe", 00:10:36.837 "is_configured": true, 00:10:36.837 "data_offset": 2048, 00:10:36.837 "data_size": 63488 00:10:36.837 }, 00:10:36.837 { 00:10:36.837 "name": null, 00:10:36.837 "uuid": "9cdb805f-d94e-4f56-9bf5-03d912699f98", 00:10:36.837 "is_configured": false, 00:10:36.837 "data_offset": 0, 00:10:36.837 "data_size": 63488 00:10:36.837 }, 00:10:36.837 { 00:10:36.837 "name": "BaseBdev3", 00:10:36.837 "uuid": "54653778-4a23-4320-8fdf-d0636bff6f2e", 00:10:36.837 "is_configured": true, 00:10:36.837 "data_offset": 2048, 00:10:36.837 "data_size": 63488 00:10:36.837 } 00:10:36.837 ] 00:10:36.837 }' 00:10:36.837 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:36.837 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.405 [2024-10-09 03:13:20.491585] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.405 "name": "Existed_Raid", 00:10:37.405 "uuid": "c8abf135-77b0-4d15-96cf-8036b76d2353", 00:10:37.405 "strip_size_kb": 0, 00:10:37.405 "state": "configuring", 00:10:37.405 "raid_level": "raid1", 00:10:37.405 "superblock": true, 00:10:37.405 "num_base_bdevs": 3, 00:10:37.405 "num_base_bdevs_discovered": 1, 00:10:37.405 "num_base_bdevs_operational": 3, 00:10:37.405 "base_bdevs_list": [ 00:10:37.405 { 00:10:37.405 "name": "BaseBdev1", 00:10:37.405 "uuid": "4774b70b-9d85-464b-ba7d-3fb19e1fe2fe", 00:10:37.405 "is_configured": true, 00:10:37.405 "data_offset": 2048, 00:10:37.405 "data_size": 63488 00:10:37.405 }, 00:10:37.405 { 00:10:37.405 "name": null, 00:10:37.405 "uuid": "9cdb805f-d94e-4f56-9bf5-03d912699f98", 00:10:37.405 "is_configured": false, 00:10:37.405 "data_offset": 0, 00:10:37.405 "data_size": 63488 00:10:37.405 }, 00:10:37.405 { 00:10:37.405 "name": null, 00:10:37.405 "uuid": "54653778-4a23-4320-8fdf-d0636bff6f2e", 00:10:37.405 "is_configured": false, 00:10:37.405 "data_offset": 0, 00:10:37.405 "data_size": 63488 00:10:37.405 } 00:10:37.405 ] 00:10:37.405 }' 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.405 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.664 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:37.664 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.664 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:37.664 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.664 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.922 [2024-10-09 03:13:20.986816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:37.922 03:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:37.922 03:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:37.922 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.922 "name": "Existed_Raid", 00:10:37.922 "uuid": "c8abf135-77b0-4d15-96cf-8036b76d2353", 00:10:37.922 "strip_size_kb": 0, 00:10:37.922 "state": "configuring", 00:10:37.922 "raid_level": "raid1", 00:10:37.922 "superblock": true, 00:10:37.922 "num_base_bdevs": 3, 00:10:37.923 "num_base_bdevs_discovered": 2, 00:10:37.923 "num_base_bdevs_operational": 3, 00:10:37.923 "base_bdevs_list": [ 00:10:37.923 { 00:10:37.923 "name": "BaseBdev1", 00:10:37.923 "uuid": "4774b70b-9d85-464b-ba7d-3fb19e1fe2fe", 00:10:37.923 "is_configured": true, 00:10:37.923 "data_offset": 2048, 00:10:37.923 "data_size": 63488 00:10:37.923 }, 00:10:37.923 { 00:10:37.923 "name": null, 00:10:37.923 "uuid": "9cdb805f-d94e-4f56-9bf5-03d912699f98", 00:10:37.923 "is_configured": false, 00:10:37.923 "data_offset": 0, 00:10:37.923 "data_size": 63488 00:10:37.923 }, 00:10:37.923 { 00:10:37.923 "name": "BaseBdev3", 00:10:37.923 "uuid": "54653778-4a23-4320-8fdf-d0636bff6f2e", 00:10:37.923 "is_configured": true, 00:10:37.923 "data_offset": 2048, 00:10:37.923 "data_size": 63488 00:10:37.923 } 00:10:37.923 ] 00:10:37.923 }' 00:10:37.923 03:13:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.923 03:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.181 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.181 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:38.181 03:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.181 03:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.181 03:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.441 [2024-10-09 03:13:21.501968] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.441 "name": "Existed_Raid", 00:10:38.441 "uuid": "c8abf135-77b0-4d15-96cf-8036b76d2353", 00:10:38.441 "strip_size_kb": 0, 00:10:38.441 "state": "configuring", 00:10:38.441 "raid_level": "raid1", 00:10:38.441 "superblock": true, 00:10:38.441 "num_base_bdevs": 3, 00:10:38.441 "num_base_bdevs_discovered": 1, 00:10:38.441 "num_base_bdevs_operational": 3, 00:10:38.441 "base_bdevs_list": [ 00:10:38.441 { 00:10:38.441 "name": null, 00:10:38.441 "uuid": "4774b70b-9d85-464b-ba7d-3fb19e1fe2fe", 00:10:38.441 "is_configured": false, 00:10:38.441 "data_offset": 0, 00:10:38.441 "data_size": 63488 00:10:38.441 }, 00:10:38.441 { 00:10:38.441 "name": null, 00:10:38.441 "uuid": 
"9cdb805f-d94e-4f56-9bf5-03d912699f98", 00:10:38.441 "is_configured": false, 00:10:38.441 "data_offset": 0, 00:10:38.441 "data_size": 63488 00:10:38.441 }, 00:10:38.441 { 00:10:38.441 "name": "BaseBdev3", 00:10:38.441 "uuid": "54653778-4a23-4320-8fdf-d0636bff6f2e", 00:10:38.441 "is_configured": true, 00:10:38.441 "data_offset": 2048, 00:10:38.441 "data_size": 63488 00:10:38.441 } 00:10:38.441 ] 00:10:38.441 }' 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.441 03:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.010 [2024-10-09 03:13:22.105028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.010 "name": "Existed_Raid", 00:10:39.010 "uuid": "c8abf135-77b0-4d15-96cf-8036b76d2353", 00:10:39.010 "strip_size_kb": 0, 00:10:39.010 "state": "configuring", 00:10:39.010 
"raid_level": "raid1", 00:10:39.010 "superblock": true, 00:10:39.010 "num_base_bdevs": 3, 00:10:39.010 "num_base_bdevs_discovered": 2, 00:10:39.010 "num_base_bdevs_operational": 3, 00:10:39.010 "base_bdevs_list": [ 00:10:39.010 { 00:10:39.010 "name": null, 00:10:39.010 "uuid": "4774b70b-9d85-464b-ba7d-3fb19e1fe2fe", 00:10:39.010 "is_configured": false, 00:10:39.010 "data_offset": 0, 00:10:39.010 "data_size": 63488 00:10:39.010 }, 00:10:39.010 { 00:10:39.010 "name": "BaseBdev2", 00:10:39.010 "uuid": "9cdb805f-d94e-4f56-9bf5-03d912699f98", 00:10:39.010 "is_configured": true, 00:10:39.010 "data_offset": 2048, 00:10:39.010 "data_size": 63488 00:10:39.010 }, 00:10:39.010 { 00:10:39.010 "name": "BaseBdev3", 00:10:39.010 "uuid": "54653778-4a23-4320-8fdf-d0636bff6f2e", 00:10:39.010 "is_configured": true, 00:10:39.010 "data_offset": 2048, 00:10:39.010 "data_size": 63488 00:10:39.010 } 00:10:39.010 ] 00:10:39.010 }' 00:10:39.010 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.011 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.580 03:13:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4774b70b-9d85-464b-ba7d-3fb19e1fe2fe 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.580 [2024-10-09 03:13:22.728465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:39.580 [2024-10-09 03:13:22.728794] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:39.580 [2024-10-09 03:13:22.728858] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:39.580 [2024-10-09 03:13:22.729222] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:39.580 [2024-10-09 03:13:22.729463] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:39.580 NewBaseBdev 00:10:39.580 [2024-10-09 03:13:22.729525] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:39.580 [2024-10-09 03:13:22.729744] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:39.580 
03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.580 [ 00:10:39.580 { 00:10:39.580 "name": "NewBaseBdev", 00:10:39.580 "aliases": [ 00:10:39.580 "4774b70b-9d85-464b-ba7d-3fb19e1fe2fe" 00:10:39.580 ], 00:10:39.580 "product_name": "Malloc disk", 00:10:39.580 "block_size": 512, 00:10:39.580 "num_blocks": 65536, 00:10:39.580 "uuid": "4774b70b-9d85-464b-ba7d-3fb19e1fe2fe", 00:10:39.580 "assigned_rate_limits": { 00:10:39.580 "rw_ios_per_sec": 0, 00:10:39.580 "rw_mbytes_per_sec": 0, 00:10:39.580 "r_mbytes_per_sec": 0, 00:10:39.580 "w_mbytes_per_sec": 0 00:10:39.580 }, 00:10:39.580 "claimed": true, 00:10:39.580 "claim_type": "exclusive_write", 00:10:39.580 
"zoned": false, 00:10:39.580 "supported_io_types": { 00:10:39.580 "read": true, 00:10:39.580 "write": true, 00:10:39.580 "unmap": true, 00:10:39.580 "flush": true, 00:10:39.580 "reset": true, 00:10:39.580 "nvme_admin": false, 00:10:39.580 "nvme_io": false, 00:10:39.580 "nvme_io_md": false, 00:10:39.580 "write_zeroes": true, 00:10:39.580 "zcopy": true, 00:10:39.580 "get_zone_info": false, 00:10:39.580 "zone_management": false, 00:10:39.580 "zone_append": false, 00:10:39.580 "compare": false, 00:10:39.580 "compare_and_write": false, 00:10:39.580 "abort": true, 00:10:39.580 "seek_hole": false, 00:10:39.580 "seek_data": false, 00:10:39.580 "copy": true, 00:10:39.580 "nvme_iov_md": false 00:10:39.580 }, 00:10:39.580 "memory_domains": [ 00:10:39.580 { 00:10:39.580 "dma_device_id": "system", 00:10:39.580 "dma_device_type": 1 00:10:39.580 }, 00:10:39.580 { 00:10:39.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.580 "dma_device_type": 2 00:10:39.580 } 00:10:39.580 ], 00:10:39.580 "driver_specific": {} 00:10:39.580 } 00:10:39.580 ] 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:39.580 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:10:39.581 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.581 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.581 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.581 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.581 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.581 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:39.581 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:39.581 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.581 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:39.581 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.581 "name": "Existed_Raid", 00:10:39.581 "uuid": "c8abf135-77b0-4d15-96cf-8036b76d2353", 00:10:39.581 "strip_size_kb": 0, 00:10:39.581 "state": "online", 00:10:39.581 "raid_level": "raid1", 00:10:39.581 "superblock": true, 00:10:39.581 "num_base_bdevs": 3, 00:10:39.581 "num_base_bdevs_discovered": 3, 00:10:39.581 "num_base_bdevs_operational": 3, 00:10:39.581 "base_bdevs_list": [ 00:10:39.581 { 00:10:39.581 "name": "NewBaseBdev", 00:10:39.581 "uuid": "4774b70b-9d85-464b-ba7d-3fb19e1fe2fe", 00:10:39.581 "is_configured": true, 00:10:39.581 "data_offset": 2048, 00:10:39.581 "data_size": 63488 00:10:39.581 }, 00:10:39.581 { 00:10:39.581 "name": "BaseBdev2", 00:10:39.581 "uuid": "9cdb805f-d94e-4f56-9bf5-03d912699f98", 00:10:39.581 "is_configured": true, 00:10:39.581 "data_offset": 2048, 00:10:39.581 "data_size": 63488 00:10:39.581 }, 00:10:39.581 
{ 00:10:39.581 "name": "BaseBdev3", 00:10:39.581 "uuid": "54653778-4a23-4320-8fdf-d0636bff6f2e", 00:10:39.581 "is_configured": true, 00:10:39.581 "data_offset": 2048, 00:10:39.581 "data_size": 63488 00:10:39.581 } 00:10:39.581 ] 00:10:39.581 }' 00:10:39.581 03:13:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.581 03:13:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.150 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:40.150 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:40.150 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:40.150 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:40.150 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:40.150 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.151 [2024-10-09 03:13:23.176203] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:40.151 "name": "Existed_Raid", 00:10:40.151 
"aliases": [ 00:10:40.151 "c8abf135-77b0-4d15-96cf-8036b76d2353" 00:10:40.151 ], 00:10:40.151 "product_name": "Raid Volume", 00:10:40.151 "block_size": 512, 00:10:40.151 "num_blocks": 63488, 00:10:40.151 "uuid": "c8abf135-77b0-4d15-96cf-8036b76d2353", 00:10:40.151 "assigned_rate_limits": { 00:10:40.151 "rw_ios_per_sec": 0, 00:10:40.151 "rw_mbytes_per_sec": 0, 00:10:40.151 "r_mbytes_per_sec": 0, 00:10:40.151 "w_mbytes_per_sec": 0 00:10:40.151 }, 00:10:40.151 "claimed": false, 00:10:40.151 "zoned": false, 00:10:40.151 "supported_io_types": { 00:10:40.151 "read": true, 00:10:40.151 "write": true, 00:10:40.151 "unmap": false, 00:10:40.151 "flush": false, 00:10:40.151 "reset": true, 00:10:40.151 "nvme_admin": false, 00:10:40.151 "nvme_io": false, 00:10:40.151 "nvme_io_md": false, 00:10:40.151 "write_zeroes": true, 00:10:40.151 "zcopy": false, 00:10:40.151 "get_zone_info": false, 00:10:40.151 "zone_management": false, 00:10:40.151 "zone_append": false, 00:10:40.151 "compare": false, 00:10:40.151 "compare_and_write": false, 00:10:40.151 "abort": false, 00:10:40.151 "seek_hole": false, 00:10:40.151 "seek_data": false, 00:10:40.151 "copy": false, 00:10:40.151 "nvme_iov_md": false 00:10:40.151 }, 00:10:40.151 "memory_domains": [ 00:10:40.151 { 00:10:40.151 "dma_device_id": "system", 00:10:40.151 "dma_device_type": 1 00:10:40.151 }, 00:10:40.151 { 00:10:40.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.151 "dma_device_type": 2 00:10:40.151 }, 00:10:40.151 { 00:10:40.151 "dma_device_id": "system", 00:10:40.151 "dma_device_type": 1 00:10:40.151 }, 00:10:40.151 { 00:10:40.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.151 "dma_device_type": 2 00:10:40.151 }, 00:10:40.151 { 00:10:40.151 "dma_device_id": "system", 00:10:40.151 "dma_device_type": 1 00:10:40.151 }, 00:10:40.151 { 00:10:40.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:40.151 "dma_device_type": 2 00:10:40.151 } 00:10:40.151 ], 00:10:40.151 "driver_specific": { 00:10:40.151 "raid": { 00:10:40.151 
"uuid": "c8abf135-77b0-4d15-96cf-8036b76d2353", 00:10:40.151 "strip_size_kb": 0, 00:10:40.151 "state": "online", 00:10:40.151 "raid_level": "raid1", 00:10:40.151 "superblock": true, 00:10:40.151 "num_base_bdevs": 3, 00:10:40.151 "num_base_bdevs_discovered": 3, 00:10:40.151 "num_base_bdevs_operational": 3, 00:10:40.151 "base_bdevs_list": [ 00:10:40.151 { 00:10:40.151 "name": "NewBaseBdev", 00:10:40.151 "uuid": "4774b70b-9d85-464b-ba7d-3fb19e1fe2fe", 00:10:40.151 "is_configured": true, 00:10:40.151 "data_offset": 2048, 00:10:40.151 "data_size": 63488 00:10:40.151 }, 00:10:40.151 { 00:10:40.151 "name": "BaseBdev2", 00:10:40.151 "uuid": "9cdb805f-d94e-4f56-9bf5-03d912699f98", 00:10:40.151 "is_configured": true, 00:10:40.151 "data_offset": 2048, 00:10:40.151 "data_size": 63488 00:10:40.151 }, 00:10:40.151 { 00:10:40.151 "name": "BaseBdev3", 00:10:40.151 "uuid": "54653778-4a23-4320-8fdf-d0636bff6f2e", 00:10:40.151 "is_configured": true, 00:10:40.151 "data_offset": 2048, 00:10:40.151 "data_size": 63488 00:10:40.151 } 00:10:40.151 ] 00:10:40.151 } 00:10:40.151 } 00:10:40.151 }' 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:40.151 BaseBdev2 00:10:40.151 BaseBdev3' 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:40.151 03:13:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:40.151 [2024-10-09 03:13:23.439329] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:40.151 [2024-10-09 03:13:23.439368] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:40.151 [2024-10-09 03:13:23.439456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:40.151 [2024-10-09 03:13:23.439781] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:40.151 [2024-10-09 03:13:23.439792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 68148 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 68148 ']' 
00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 68148 00:10:40.151 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:10:40.411 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:40.411 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68148 00:10:40.411 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:40.411 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:40.411 killing process with pid 68148 00:10:40.411 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68148' 00:10:40.411 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 68148 00:10:40.411 [2024-10-09 03:13:23.490115] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:40.411 03:13:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 68148 00:10:40.672 [2024-10-09 03:13:23.824962] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:42.051 03:13:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:42.051 00:10:42.051 real 0m11.084s 00:10:42.051 user 0m17.240s 00:10:42.051 sys 0m2.042s 00:10:42.051 03:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:42.051 ************************************ 00:10:42.051 END TEST raid_state_function_test_sb 00:10:42.051 ************************************ 00:10:42.051 03:13:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:42.051 03:13:25 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 3 
00:10:42.051 03:13:25 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:42.051 03:13:25 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:42.051 03:13:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:42.051 ************************************ 00:10:42.051 START TEST raid_superblock_test 00:10:42.051 ************************************ 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 3 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 
00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68768 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68768 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 68768 ']' 00:10:42.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:42.051 03:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:42.052 03:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:42.052 03:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:42.052 03:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.310 [2024-10-09 03:13:25.389195] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:10:42.310 [2024-10-09 03:13:25.389399] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68768 ] 00:10:42.310 [2024-10-09 03:13:25.551329] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:42.569 [2024-10-09 03:13:25.803722] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:42.838 [2024-10-09 03:13:26.045225] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:42.838 [2024-10-09 03:13:26.045337] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:43.105 
03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.105 malloc1 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.105 [2024-10-09 03:13:26.271451] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:43.105 [2024-10-09 03:13:26.271567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.105 [2024-10-09 03:13:26.271613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:43.105 [2024-10-09 03:13:26.271646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.105 [2024-10-09 03:13:26.274083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.105 [2024-10-09 03:13:26.274153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:43.105 pt1 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.105 malloc2 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.105 [2024-10-09 03:13:26.347382] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:43.105 [2024-10-09 03:13:26.347473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.105 [2024-10-09 03:13:26.347515] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:43.105 [2024-10-09 03:13:26.347544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.105 [2024-10-09 03:13:26.349912] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.105 [2024-10-09 03:13:26.349980] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:43.105 
pt2 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.105 malloc3 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.105 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.365 [2024-10-09 03:13:26.411920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:43.365 [2024-10-09 03:13:26.412004] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:43.365 [2024-10-09 03:13:26.412042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:43.365 [2024-10-09 03:13:26.412073] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:43.365 [2024-10-09 03:13:26.414415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:43.365 [2024-10-09 03:13:26.414485] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:43.365 pt3 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.365 [2024-10-09 03:13:26.423970] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:43.365 [2024-10-09 03:13:26.426068] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:43.365 [2024-10-09 03:13:26.426141] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:43.365 [2024-10-09 03:13:26.426305] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:43.365 [2024-10-09 03:13:26.426319] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:43.365 [2024-10-09 03:13:26.426543] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:43.365 
[2024-10-09 03:13:26.426712] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:43.365 [2024-10-09 03:13:26.426723] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:43.365 [2024-10-09 03:13:26.426892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.365 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:43.365 "name": "raid_bdev1", 00:10:43.365 "uuid": "896d0913-6dd7-4ec4-a4f1-87e982cccdce", 00:10:43.365 "strip_size_kb": 0, 00:10:43.365 "state": "online", 00:10:43.365 "raid_level": "raid1", 00:10:43.365 "superblock": true, 00:10:43.366 "num_base_bdevs": 3, 00:10:43.366 "num_base_bdevs_discovered": 3, 00:10:43.366 "num_base_bdevs_operational": 3, 00:10:43.366 "base_bdevs_list": [ 00:10:43.366 { 00:10:43.366 "name": "pt1", 00:10:43.366 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.366 "is_configured": true, 00:10:43.366 "data_offset": 2048, 00:10:43.366 "data_size": 63488 00:10:43.366 }, 00:10:43.366 { 00:10:43.366 "name": "pt2", 00:10:43.366 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.366 "is_configured": true, 00:10:43.366 "data_offset": 2048, 00:10:43.366 "data_size": 63488 00:10:43.366 }, 00:10:43.366 { 00:10:43.366 "name": "pt3", 00:10:43.366 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.366 "is_configured": true, 00:10:43.366 "data_offset": 2048, 00:10:43.366 "data_size": 63488 00:10:43.366 } 00:10:43.366 ] 00:10:43.366 }' 00:10:43.366 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:43.366 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.625 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:43.625 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:43.625 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:43.625 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:43.625 03:13:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.625 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.625 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:43.625 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.625 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:43.625 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.625 [2024-10-09 03:13:26.899370] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.625 03:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.885 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.885 "name": "raid_bdev1", 00:10:43.885 "aliases": [ 00:10:43.885 "896d0913-6dd7-4ec4-a4f1-87e982cccdce" 00:10:43.885 ], 00:10:43.885 "product_name": "Raid Volume", 00:10:43.885 "block_size": 512, 00:10:43.885 "num_blocks": 63488, 00:10:43.885 "uuid": "896d0913-6dd7-4ec4-a4f1-87e982cccdce", 00:10:43.885 "assigned_rate_limits": { 00:10:43.885 "rw_ios_per_sec": 0, 00:10:43.885 "rw_mbytes_per_sec": 0, 00:10:43.885 "r_mbytes_per_sec": 0, 00:10:43.885 "w_mbytes_per_sec": 0 00:10:43.885 }, 00:10:43.885 "claimed": false, 00:10:43.885 "zoned": false, 00:10:43.885 "supported_io_types": { 00:10:43.885 "read": true, 00:10:43.885 "write": true, 00:10:43.885 "unmap": false, 00:10:43.885 "flush": false, 00:10:43.885 "reset": true, 00:10:43.885 "nvme_admin": false, 00:10:43.885 "nvme_io": false, 00:10:43.885 "nvme_io_md": false, 00:10:43.885 "write_zeroes": true, 00:10:43.885 "zcopy": false, 00:10:43.885 "get_zone_info": false, 00:10:43.885 "zone_management": false, 00:10:43.885 "zone_append": false, 00:10:43.885 "compare": false, 00:10:43.885 
"compare_and_write": false, 00:10:43.885 "abort": false, 00:10:43.885 "seek_hole": false, 00:10:43.885 "seek_data": false, 00:10:43.885 "copy": false, 00:10:43.885 "nvme_iov_md": false 00:10:43.885 }, 00:10:43.885 "memory_domains": [ 00:10:43.885 { 00:10:43.885 "dma_device_id": "system", 00:10:43.885 "dma_device_type": 1 00:10:43.885 }, 00:10:43.885 { 00:10:43.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.885 "dma_device_type": 2 00:10:43.885 }, 00:10:43.885 { 00:10:43.885 "dma_device_id": "system", 00:10:43.885 "dma_device_type": 1 00:10:43.885 }, 00:10:43.885 { 00:10:43.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.885 "dma_device_type": 2 00:10:43.885 }, 00:10:43.885 { 00:10:43.885 "dma_device_id": "system", 00:10:43.885 "dma_device_type": 1 00:10:43.885 }, 00:10:43.885 { 00:10:43.885 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.885 "dma_device_type": 2 00:10:43.885 } 00:10:43.885 ], 00:10:43.885 "driver_specific": { 00:10:43.885 "raid": { 00:10:43.885 "uuid": "896d0913-6dd7-4ec4-a4f1-87e982cccdce", 00:10:43.885 "strip_size_kb": 0, 00:10:43.885 "state": "online", 00:10:43.885 "raid_level": "raid1", 00:10:43.885 "superblock": true, 00:10:43.885 "num_base_bdevs": 3, 00:10:43.885 "num_base_bdevs_discovered": 3, 00:10:43.885 "num_base_bdevs_operational": 3, 00:10:43.885 "base_bdevs_list": [ 00:10:43.885 { 00:10:43.885 "name": "pt1", 00:10:43.885 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:43.885 "is_configured": true, 00:10:43.885 "data_offset": 2048, 00:10:43.885 "data_size": 63488 00:10:43.885 }, 00:10:43.885 { 00:10:43.885 "name": "pt2", 00:10:43.885 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:43.885 "is_configured": true, 00:10:43.885 "data_offset": 2048, 00:10:43.885 "data_size": 63488 00:10:43.885 }, 00:10:43.885 { 00:10:43.885 "name": "pt3", 00:10:43.885 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:43.885 "is_configured": true, 00:10:43.885 "data_offset": 2048, 00:10:43.885 "data_size": 63488 00:10:43.885 } 
00:10:43.885 ] 00:10:43.885 } 00:10:43.885 } 00:10:43.885 }' 00:10:43.885 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.885 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:43.885 pt2 00:10:43.885 pt3' 00:10:43.885 03:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.885 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:44.145 [2024-10-09 03:13:27.198870] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=896d0913-6dd7-4ec4-a4f1-87e982cccdce 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 896d0913-6dd7-4ec4-a4f1-87e982cccdce ']' 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.145 [2024-10-09 03:13:27.250531] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:44.145 [2024-10-09 03:13:27.250557] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:44.145 [2024-10-09 03:13:27.250635] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:44.145 [2024-10-09 03:13:27.250717] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:44.145 [2024-10-09 03:13:27.250726] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.145 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:44.146 03:13:27 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.146 [2024-10-09 03:13:27.402275] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:44.146 [2024-10-09 03:13:27.404467] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:44.146 [2024-10-09 03:13:27.404518] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:44.146 [2024-10-09 03:13:27.404572] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:44.146 [2024-10-09 03:13:27.404617] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:44.146 [2024-10-09 03:13:27.404635] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:44.146 [2024-10-09 03:13:27.404651] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:44.146 [2024-10-09 03:13:27.404660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:44.146 request: 00:10:44.146 { 00:10:44.146 "name": "raid_bdev1", 00:10:44.146 "raid_level": "raid1", 00:10:44.146 "base_bdevs": [ 00:10:44.146 "malloc1", 00:10:44.146 "malloc2", 00:10:44.146 "malloc3" 00:10:44.146 ], 00:10:44.146 "superblock": false, 00:10:44.146 "method": "bdev_raid_create", 00:10:44.146 "req_id": 1 00:10:44.146 } 00:10:44.146 Got JSON-RPC error response 00:10:44.146 response: 00:10:44.146 { 00:10:44.146 "code": -17, 00:10:44.146 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:44.146 } 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:44.146 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.405 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 [2024-10-09 03:13:27.466136] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:44.406 [2024-10-09 03:13:27.466225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.406 [2024-10-09 03:13:27.466268] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:44.406 [2024-10-09 03:13:27.466296] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.406 [2024-10-09 03:13:27.468717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.406 [2024-10-09 03:13:27.468784] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:44.406 [2024-10-09 03:13:27.468904] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:44.406 [2024-10-09 03:13:27.468988] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:44.406 pt1 00:10:44.406 
03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.406 "name": "raid_bdev1", 00:10:44.406 "uuid": "896d0913-6dd7-4ec4-a4f1-87e982cccdce", 00:10:44.406 "strip_size_kb": 0, 00:10:44.406 
"state": "configuring", 00:10:44.406 "raid_level": "raid1", 00:10:44.406 "superblock": true, 00:10:44.406 "num_base_bdevs": 3, 00:10:44.406 "num_base_bdevs_discovered": 1, 00:10:44.406 "num_base_bdevs_operational": 3, 00:10:44.406 "base_bdevs_list": [ 00:10:44.406 { 00:10:44.406 "name": "pt1", 00:10:44.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.406 "is_configured": true, 00:10:44.406 "data_offset": 2048, 00:10:44.406 "data_size": 63488 00:10:44.406 }, 00:10:44.406 { 00:10:44.406 "name": null, 00:10:44.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.406 "is_configured": false, 00:10:44.406 "data_offset": 2048, 00:10:44.406 "data_size": 63488 00:10:44.406 }, 00:10:44.406 { 00:10:44.406 "name": null, 00:10:44.406 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.406 "is_configured": false, 00:10:44.406 "data_offset": 2048, 00:10:44.406 "data_size": 63488 00:10:44.406 } 00:10:44.406 ] 00:10:44.406 }' 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.406 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.665 [2024-10-09 03:13:27.937388] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:44.665 [2024-10-09 03:13:27.937455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:44.665 [2024-10-09 03:13:27.937481] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:10:44.665 
[2024-10-09 03:13:27.937491] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:44.665 [2024-10-09 03:13:27.938011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:44.665 [2024-10-09 03:13:27.938046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:44.665 [2024-10-09 03:13:27.938142] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:44.665 [2024-10-09 03:13:27.938166] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:44.665 pt2 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.665 [2024-10-09 03:13:27.949373] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.665 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.924 03:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.924 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:44.924 "name": "raid_bdev1", 00:10:44.924 "uuid": "896d0913-6dd7-4ec4-a4f1-87e982cccdce", 00:10:44.924 "strip_size_kb": 0, 00:10:44.924 "state": "configuring", 00:10:44.924 "raid_level": "raid1", 00:10:44.924 "superblock": true, 00:10:44.924 "num_base_bdevs": 3, 00:10:44.924 "num_base_bdevs_discovered": 1, 00:10:44.924 "num_base_bdevs_operational": 3, 00:10:44.924 "base_bdevs_list": [ 00:10:44.924 { 00:10:44.924 "name": "pt1", 00:10:44.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:44.924 "is_configured": true, 00:10:44.924 "data_offset": 2048, 00:10:44.924 "data_size": 63488 00:10:44.924 }, 00:10:44.925 { 00:10:44.925 "name": null, 00:10:44.925 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:44.925 "is_configured": false, 00:10:44.925 "data_offset": 0, 00:10:44.925 "data_size": 63488 00:10:44.925 }, 00:10:44.925 { 00:10:44.925 "name": null, 00:10:44.925 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:44.925 "is_configured": false, 00:10:44.925 
"data_offset": 2048, 00:10:44.925 "data_size": 63488 00:10:44.925 } 00:10:44.925 ] 00:10:44.925 }' 00:10:44.925 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:44.925 03:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.184 [2024-10-09 03:13:28.388673] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:45.184 [2024-10-09 03:13:28.388854] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.184 [2024-10-09 03:13:28.388905] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:45.184 [2024-10-09 03:13:28.388945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.184 [2024-10-09 03:13:28.389520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.184 [2024-10-09 03:13:28.389590] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:45.184 [2024-10-09 03:13:28.389735] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:45.184 [2024-10-09 03:13:28.389801] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:45.184 pt2 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.184 03:13:28 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.184 [2024-10-09 03:13:28.400615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:45.184 [2024-10-09 03:13:28.400699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:45.184 [2024-10-09 03:13:28.400735] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:45.184 [2024-10-09 03:13:28.400770] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:45.184 [2024-10-09 03:13:28.401217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:45.184 [2024-10-09 03:13:28.401284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:45.184 [2024-10-09 03:13:28.401373] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:45.184 [2024-10-09 03:13:28.401399] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:45.184 [2024-10-09 03:13:28.401538] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:45.184 [2024-10-09 03:13:28.401550] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:45.184 [2024-10-09 03:13:28.401809] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:45.184 [2024-10-09 03:13:28.402132] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:10:45.184 [2024-10-09 03:13:28.402151] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:45.184 [2024-10-09 03:13:28.402303] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:45.184 pt3 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:45.184 03:13:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:45.184 "name": "raid_bdev1", 00:10:45.184 "uuid": "896d0913-6dd7-4ec4-a4f1-87e982cccdce", 00:10:45.184 "strip_size_kb": 0, 00:10:45.184 "state": "online", 00:10:45.184 "raid_level": "raid1", 00:10:45.184 "superblock": true, 00:10:45.184 "num_base_bdevs": 3, 00:10:45.184 "num_base_bdevs_discovered": 3, 00:10:45.184 "num_base_bdevs_operational": 3, 00:10:45.184 "base_bdevs_list": [ 00:10:45.184 { 00:10:45.184 "name": "pt1", 00:10:45.184 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.184 "is_configured": true, 00:10:45.184 "data_offset": 2048, 00:10:45.184 "data_size": 63488 00:10:45.184 }, 00:10:45.184 { 00:10:45.184 "name": "pt2", 00:10:45.184 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.184 "is_configured": true, 00:10:45.184 "data_offset": 2048, 00:10:45.184 "data_size": 63488 00:10:45.184 }, 00:10:45.184 { 00:10:45.184 "name": "pt3", 00:10:45.184 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:45.184 "is_configured": true, 00:10:45.184 "data_offset": 2048, 00:10:45.184 "data_size": 63488 00:10:45.184 } 00:10:45.184 ] 00:10:45.184 }' 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:45.184 03:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.754 [2024-10-09 03:13:28.888186] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:45.754 "name": "raid_bdev1", 00:10:45.754 "aliases": [ 00:10:45.754 "896d0913-6dd7-4ec4-a4f1-87e982cccdce" 00:10:45.754 ], 00:10:45.754 "product_name": "Raid Volume", 00:10:45.754 "block_size": 512, 00:10:45.754 "num_blocks": 63488, 00:10:45.754 "uuid": "896d0913-6dd7-4ec4-a4f1-87e982cccdce", 00:10:45.754 "assigned_rate_limits": { 00:10:45.754 "rw_ios_per_sec": 0, 00:10:45.754 "rw_mbytes_per_sec": 0, 00:10:45.754 "r_mbytes_per_sec": 0, 00:10:45.754 "w_mbytes_per_sec": 0 00:10:45.754 }, 00:10:45.754 "claimed": false, 00:10:45.754 "zoned": false, 00:10:45.754 "supported_io_types": { 00:10:45.754 "read": true, 00:10:45.754 "write": true, 00:10:45.754 "unmap": false, 00:10:45.754 "flush": false, 00:10:45.754 "reset": true, 00:10:45.754 "nvme_admin": false, 00:10:45.754 "nvme_io": false, 00:10:45.754 "nvme_io_md": false, 00:10:45.754 "write_zeroes": true, 00:10:45.754 "zcopy": false, 00:10:45.754 "get_zone_info": 
false, 00:10:45.754 "zone_management": false, 00:10:45.754 "zone_append": false, 00:10:45.754 "compare": false, 00:10:45.754 "compare_and_write": false, 00:10:45.754 "abort": false, 00:10:45.754 "seek_hole": false, 00:10:45.754 "seek_data": false, 00:10:45.754 "copy": false, 00:10:45.754 "nvme_iov_md": false 00:10:45.754 }, 00:10:45.754 "memory_domains": [ 00:10:45.754 { 00:10:45.754 "dma_device_id": "system", 00:10:45.754 "dma_device_type": 1 00:10:45.754 }, 00:10:45.754 { 00:10:45.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.754 "dma_device_type": 2 00:10:45.754 }, 00:10:45.754 { 00:10:45.754 "dma_device_id": "system", 00:10:45.754 "dma_device_type": 1 00:10:45.754 }, 00:10:45.754 { 00:10:45.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.754 "dma_device_type": 2 00:10:45.754 }, 00:10:45.754 { 00:10:45.754 "dma_device_id": "system", 00:10:45.754 "dma_device_type": 1 00:10:45.754 }, 00:10:45.754 { 00:10:45.754 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.754 "dma_device_type": 2 00:10:45.754 } 00:10:45.754 ], 00:10:45.754 "driver_specific": { 00:10:45.754 "raid": { 00:10:45.754 "uuid": "896d0913-6dd7-4ec4-a4f1-87e982cccdce", 00:10:45.754 "strip_size_kb": 0, 00:10:45.754 "state": "online", 00:10:45.754 "raid_level": "raid1", 00:10:45.754 "superblock": true, 00:10:45.754 "num_base_bdevs": 3, 00:10:45.754 "num_base_bdevs_discovered": 3, 00:10:45.754 "num_base_bdevs_operational": 3, 00:10:45.754 "base_bdevs_list": [ 00:10:45.754 { 00:10:45.754 "name": "pt1", 00:10:45.754 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:45.754 "is_configured": true, 00:10:45.754 "data_offset": 2048, 00:10:45.754 "data_size": 63488 00:10:45.754 }, 00:10:45.754 { 00:10:45.754 "name": "pt2", 00:10:45.754 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:45.754 "is_configured": true, 00:10:45.754 "data_offset": 2048, 00:10:45.754 "data_size": 63488 00:10:45.754 }, 00:10:45.754 { 00:10:45.754 "name": "pt3", 00:10:45.754 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:10:45.754 "is_configured": true, 00:10:45.754 "data_offset": 2048, 00:10:45.754 "data_size": 63488 00:10:45.754 } 00:10:45.754 ] 00:10:45.754 } 00:10:45.754 } 00:10:45.754 }' 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:45.754 pt2 00:10:45.754 pt3' 00:10:45.754 03:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.754 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:45.754 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:45.754 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:45.754 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:45.754 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:45.754 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.754 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq 
-r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.014 [2024-10-09 03:13:29.163548] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 896d0913-6dd7-4ec4-a4f1-87e982cccdce '!=' 896d0913-6dd7-4ec4-a4f1-87e982cccdce ']' 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.014 [2024-10-09 03:13:29.211260] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.014 03:13:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.014 "name": "raid_bdev1", 00:10:46.014 "uuid": "896d0913-6dd7-4ec4-a4f1-87e982cccdce", 00:10:46.014 "strip_size_kb": 0, 00:10:46.014 "state": "online", 00:10:46.014 "raid_level": "raid1", 00:10:46.014 "superblock": true, 00:10:46.014 "num_base_bdevs": 3, 00:10:46.014 "num_base_bdevs_discovered": 2, 00:10:46.014 "num_base_bdevs_operational": 2, 00:10:46.014 "base_bdevs_list": [ 00:10:46.014 { 00:10:46.014 "name": null, 00:10:46.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.014 "is_configured": false, 00:10:46.014 "data_offset": 0, 00:10:46.014 "data_size": 63488 00:10:46.014 }, 00:10:46.014 { 00:10:46.014 "name": "pt2", 00:10:46.014 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.014 "is_configured": true, 00:10:46.014 "data_offset": 2048, 00:10:46.014 "data_size": 63488 00:10:46.014 }, 00:10:46.014 { 00:10:46.014 "name": "pt3", 00:10:46.014 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.014 "is_configured": true, 00:10:46.014 "data_offset": 2048, 00:10:46.014 "data_size": 63488 00:10:46.014 } 
00:10:46.014 ] 00:10:46.014 }' 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.014 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 [2024-10-09 03:13:29.606645] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:46.583 [2024-10-09 03:13:29.606692] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.583 [2024-10-09 03:13:29.606804] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.583 [2024-10-09 03:13:29.606886] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.583 [2024-10-09 03:13:29.606905] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.583 03:13:29 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 [2024-10-09 03:13:29.690467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:46.583 [2024-10-09 03:13:29.690551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.583 [2024-10-09 03:13:29.690573] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:10:46.583 [2024-10-09 03:13:29.690585] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.583 [2024-10-09 03:13:29.693232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.583 [2024-10-09 03:13:29.693340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:46.583 [2024-10-09 03:13:29.693455] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:46.583 [2024-10-09 03:13:29.693513] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:46.583 pt2 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.583 03:13:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:46.583 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.583 "name": "raid_bdev1", 00:10:46.583 "uuid": "896d0913-6dd7-4ec4-a4f1-87e982cccdce", 00:10:46.583 "strip_size_kb": 0, 00:10:46.583 "state": "configuring", 00:10:46.583 "raid_level": "raid1", 00:10:46.584 "superblock": true, 00:10:46.584 "num_base_bdevs": 3, 00:10:46.584 "num_base_bdevs_discovered": 1, 00:10:46.584 "num_base_bdevs_operational": 2, 00:10:46.584 "base_bdevs_list": [ 00:10:46.584 { 00:10:46.584 "name": null, 00:10:46.584 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.584 "is_configured": false, 00:10:46.584 "data_offset": 2048, 00:10:46.584 "data_size": 63488 00:10:46.584 }, 00:10:46.584 { 00:10:46.584 "name": "pt2", 00:10:46.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:46.584 "is_configured": true, 00:10:46.584 "data_offset": 2048, 00:10:46.584 "data_size": 63488 00:10:46.584 }, 00:10:46.584 { 00:10:46.584 "name": null, 00:10:46.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:46.584 "is_configured": false, 00:10:46.584 "data_offset": 2048, 00:10:46.584 "data_size": 63488 00:10:46.584 } 
00:10:46.584 ] 00:10:46.584 }' 00:10:46.584 03:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.584 03:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.843 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:10:46.843 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:10:46.843 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:10:46.843 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:46.843 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:46.843 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.843 [2024-10-09 03:13:30.141689] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:46.843 [2024-10-09 03:13:30.141779] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:46.843 [2024-10-09 03:13:30.141803] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:10:46.843 [2024-10-09 03:13:30.141816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:46.843 [2024-10-09 03:13:30.142372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:46.843 [2024-10-09 03:13:30.142405] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:46.843 [2024-10-09 03:13:30.142502] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:46.843 [2024-10-09 03:13:30.142546] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:46.843 [2024-10-09 03:13:30.142682] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:10:46.843 [2024-10-09 03:13:30.142701] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:46.843 [2024-10-09 03:13:30.142992] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:46.843 [2024-10-09 03:13:30.143169] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:46.843 [2024-10-09 03:13:30.143184] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:46.843 [2024-10-09 03:13:30.143350] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.102 pt3 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.102 
03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.102 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.102 "name": "raid_bdev1", 00:10:47.102 "uuid": "896d0913-6dd7-4ec4-a4f1-87e982cccdce", 00:10:47.102 "strip_size_kb": 0, 00:10:47.102 "state": "online", 00:10:47.102 "raid_level": "raid1", 00:10:47.102 "superblock": true, 00:10:47.102 "num_base_bdevs": 3, 00:10:47.102 "num_base_bdevs_discovered": 2, 00:10:47.102 "num_base_bdevs_operational": 2, 00:10:47.102 "base_bdevs_list": [ 00:10:47.102 { 00:10:47.102 "name": null, 00:10:47.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.102 "is_configured": false, 00:10:47.102 "data_offset": 2048, 00:10:47.102 "data_size": 63488 00:10:47.102 }, 00:10:47.102 { 00:10:47.102 "name": "pt2", 00:10:47.102 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.102 "is_configured": true, 00:10:47.102 "data_offset": 2048, 00:10:47.102 "data_size": 63488 00:10:47.102 }, 00:10:47.102 { 00:10:47.102 "name": "pt3", 00:10:47.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.103 "is_configured": true, 00:10:47.103 "data_offset": 2048, 00:10:47.103 "data_size": 63488 00:10:47.103 } 00:10:47.103 ] 00:10:47.103 }' 00:10:47.103 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.103 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.362 [2024-10-09 03:13:30.592942] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.362 [2024-10-09 03:13:30.592976] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:47.362 [2024-10-09 03:13:30.593049] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:47.362 [2024-10-09 03:13:30.593111] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:47.362 [2024-10-09 03:13:30.593124] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.362 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.362 [2024-10-09 03:13:30.664909] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:47.362 [2024-10-09 03:13:30.664961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.363 [2024-10-09 03:13:30.664983] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:10:47.363 [2024-10-09 03:13:30.664993] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.622 [2024-10-09 03:13:30.667437] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.622 [2024-10-09 03:13:30.667471] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:47.622 [2024-10-09 03:13:30.667538] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:47.622 [2024-10-09 03:13:30.667580] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:47.622 [2024-10-09 03:13:30.667694] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:10:47.622 [2024-10-09 03:13:30.667711] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:47.622 [2024-10-09 03:13:30.667727] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:10:47.622 [2024-10-09 03:13:30.667786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:47.622 pt1 00:10:47.622 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.622 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:10:47.622 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:10:47.622 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.622 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.622 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.622 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.622 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:47.622 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.622 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.622 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.622 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.622 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.623 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.623 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.623 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.623 03:13:30 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.623 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.623 "name": "raid_bdev1", 00:10:47.623 "uuid": "896d0913-6dd7-4ec4-a4f1-87e982cccdce", 00:10:47.623 "strip_size_kb": 0, 00:10:47.623 "state": "configuring", 00:10:47.623 "raid_level": "raid1", 00:10:47.623 "superblock": true, 00:10:47.623 "num_base_bdevs": 3, 00:10:47.623 "num_base_bdevs_discovered": 1, 00:10:47.623 "num_base_bdevs_operational": 2, 00:10:47.623 "base_bdevs_list": [ 00:10:47.623 { 00:10:47.623 "name": null, 00:10:47.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.623 "is_configured": false, 00:10:47.623 "data_offset": 2048, 00:10:47.623 "data_size": 63488 00:10:47.623 }, 00:10:47.623 { 00:10:47.623 "name": "pt2", 00:10:47.623 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:47.623 "is_configured": true, 00:10:47.623 "data_offset": 2048, 00:10:47.623 "data_size": 63488 00:10:47.623 }, 00:10:47.623 { 00:10:47.623 "name": null, 00:10:47.623 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:47.623 "is_configured": false, 00:10:47.623 "data_offset": 2048, 00:10:47.623 "data_size": 63488 00:10:47.623 } 00:10:47.623 ] 00:10:47.623 }' 00:10:47.623 03:13:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.623 03:13:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.882 [2024-10-09 03:13:31.136122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:47.882 [2024-10-09 03:13:31.136209] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:47.882 [2024-10-09 03:13:31.136236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:10:47.882 [2024-10-09 03:13:31.136246] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:47.882 [2024-10-09 03:13:31.136813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:47.882 [2024-10-09 03:13:31.136855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:47.882 [2024-10-09 03:13:31.136970] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:47.882 [2024-10-09 03:13:31.137035] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:47.882 [2024-10-09 03:13:31.137220] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:10:47.882 [2024-10-09 03:13:31.137238] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:47.882 [2024-10-09 03:13:31.137564] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:10:47.882 [2024-10-09 03:13:31.137766] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:10:47.882 [2024-10-09 03:13:31.137789] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:10:47.882 [2024-10-09 03:13:31.137992] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:47.882 pt3 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:47.882 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:10:48.141 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.141 "name": "raid_bdev1", 00:10:48.141 "uuid": "896d0913-6dd7-4ec4-a4f1-87e982cccdce", 00:10:48.141 "strip_size_kb": 0, 00:10:48.141 "state": "online", 00:10:48.141 "raid_level": "raid1", 00:10:48.141 "superblock": true, 00:10:48.141 "num_base_bdevs": 3, 00:10:48.141 "num_base_bdevs_discovered": 2, 00:10:48.141 "num_base_bdevs_operational": 2, 00:10:48.141 "base_bdevs_list": [ 00:10:48.141 { 00:10:48.141 "name": null, 00:10:48.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.141 "is_configured": false, 00:10:48.141 "data_offset": 2048, 00:10:48.141 "data_size": 63488 00:10:48.141 }, 00:10:48.141 { 00:10:48.141 "name": "pt2", 00:10:48.141 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:48.141 "is_configured": true, 00:10:48.141 "data_offset": 2048, 00:10:48.141 "data_size": 63488 00:10:48.141 }, 00:10:48.141 { 00:10:48.141 "name": "pt3", 00:10:48.141 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:48.141 "is_configured": true, 00:10:48.141 "data_offset": 2048, 00:10:48.141 "data_size": 63488 00:10:48.141 } 00:10:48.141 ] 00:10:48.141 }' 00:10:48.141 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.141 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.402 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:10:48.402 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.402 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.402 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:10:48.402 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.402 03:13:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:10:48.402 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:48.402 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:10:48.402 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:48.402 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:48.402 [2024-10-09 03:13:31.679639] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:48.402 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:48.661 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 896d0913-6dd7-4ec4-a4f1-87e982cccdce '!=' 896d0913-6dd7-4ec4-a4f1-87e982cccdce ']' 00:10:48.661 03:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68768 00:10:48.661 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 68768 ']' 00:10:48.661 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 68768 00:10:48.661 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:10:48.661 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:48.661 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 68768 00:10:48.661 killing process with pid 68768 00:10:48.662 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:48.662 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:48.662 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 68768' 00:10:48.662 03:13:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@969 -- # kill 68768 00:10:48.662 [2024-10-09 03:13:31.750610] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:48.662 [2024-10-09 03:13:31.750705] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:48.662 03:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 68768 00:10:48.662 [2024-10-09 03:13:31.750775] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:48.662 [2024-10-09 03:13:31.750789] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:10:48.921 [2024-10-09 03:13:32.136100] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:50.301 03:13:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:50.301 00:10:50.301 real 0m8.303s 00:10:50.301 user 0m12.669s 00:10:50.301 sys 0m1.514s 00:10:50.301 03:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:50.301 03:13:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.301 ************************************ 00:10:50.301 END TEST raid_superblock_test 00:10:50.301 ************************************ 00:10:50.560 03:13:33 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:10:50.560 03:13:33 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:50.560 03:13:33 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:50.560 03:13:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:50.560 ************************************ 00:10:50.560 START TEST raid_read_error_test 00:10:50.560 ************************************ 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 read 00:10:50.560 03:13:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:50.560 03:13:33 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.hk1eLLnnLP 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69220 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69220 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 69220 ']' 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:50.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:50.560 03:13:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:50.560 [2024-10-09 03:13:33.774987] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:10:50.560 [2024-10-09 03:13:33.775109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69220 ] 00:10:50.820 [2024-10-09 03:13:33.939019] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:51.079 [2024-10-09 03:13:34.214551] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:51.338 [2024-10-09 03:13:34.468291] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.338 [2024-10-09 03:13:34.468337] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:51.338 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:51.338 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:51.338 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.338 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:51.338 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.338 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.597 BaseBdev1_malloc 00:10:51.597 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.597 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:51.597 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.597 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.597 true 00:10:51.597 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:10:51.597 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:51.597 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.597 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.597 [2024-10-09 03:13:34.699718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:51.597 [2024-10-09 03:13:34.699790] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.598 [2024-10-09 03:13:34.699808] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:51.598 [2024-10-09 03:13:34.699820] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.598 [2024-10-09 03:13:34.702377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.598 [2024-10-09 03:13:34.702421] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:51.598 BaseBdev1 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.598 BaseBdev2_malloc 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.598 true 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.598 [2024-10-09 03:13:34.791950] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:51.598 [2024-10-09 03:13:34.792009] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.598 [2024-10-09 03:13:34.792042] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:51.598 [2024-10-09 03:13:34.792053] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.598 [2024-10-09 03:13:34.794142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.598 [2024-10-09 03:13:34.794185] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:51.598 BaseBdev2 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.598 BaseBdev3_malloc 00:10:51.598 03:13:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.598 true 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.598 [2024-10-09 03:13:34.860103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:51.598 [2024-10-09 03:13:34.860163] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:51.598 [2024-10-09 03:13:34.860181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:51.598 [2024-10-09 03:13:34.860192] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:51.598 [2024-10-09 03:13:34.862458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:51.598 [2024-10-09 03:13:34.862515] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:51.598 BaseBdev3 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.598 [2024-10-09 03:13:34.872174] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:51.598 [2024-10-09 03:13:34.874535] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.598 [2024-10-09 03:13:34.874630] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:51.598 [2024-10-09 03:13:34.874914] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:51.598 [2024-10-09 03:13:34.874940] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:51.598 [2024-10-09 03:13:34.875253] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:51.598 [2024-10-09 03:13:34.875468] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:51.598 [2024-10-09 03:13:34.875504] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:51.598 [2024-10-09 03:13:34.875714] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:51.598 03:13:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:51.598 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:51.857 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.857 "name": "raid_bdev1", 00:10:51.858 "uuid": "040f236a-05b6-472f-9ab9-0046e1d58906", 00:10:51.858 "strip_size_kb": 0, 00:10:51.858 "state": "online", 00:10:51.858 "raid_level": "raid1", 00:10:51.858 "superblock": true, 00:10:51.858 "num_base_bdevs": 3, 00:10:51.858 "num_base_bdevs_discovered": 3, 00:10:51.858 "num_base_bdevs_operational": 3, 00:10:51.858 "base_bdevs_list": [ 00:10:51.858 { 00:10:51.858 "name": "BaseBdev1", 00:10:51.858 "uuid": "4a20eb08-991a-58cd-ad5d-92ca56810a0f", 00:10:51.858 "is_configured": true, 00:10:51.858 "data_offset": 2048, 00:10:51.858 "data_size": 63488 00:10:51.858 }, 00:10:51.858 { 00:10:51.858 "name": "BaseBdev2", 00:10:51.858 "uuid": "3ac77365-caaf-5b06-8ae1-7e97b6bdc3da", 00:10:51.858 "is_configured": true, 00:10:51.858 "data_offset": 2048, 00:10:51.858 "data_size": 63488 
00:10:51.858 }, 00:10:51.858 { 00:10:51.858 "name": "BaseBdev3", 00:10:51.858 "uuid": "d8eee8c6-cd14-5528-bb93-38c35aabf1e3", 00:10:51.858 "is_configured": true, 00:10:51.858 "data_offset": 2048, 00:10:51.858 "data_size": 63488 00:10:51.858 } 00:10:51.858 ] 00:10:51.858 }' 00:10:51.858 03:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.858 03:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:52.117 03:13:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:52.117 03:13:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:52.377 [2024-10-09 03:13:35.432459] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:53.315 
03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.315 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.315 "name": "raid_bdev1", 00:10:53.315 "uuid": "040f236a-05b6-472f-9ab9-0046e1d58906", 00:10:53.315 "strip_size_kb": 0, 00:10:53.315 "state": "online", 00:10:53.315 "raid_level": "raid1", 00:10:53.315 "superblock": true, 00:10:53.315 "num_base_bdevs": 3, 00:10:53.315 "num_base_bdevs_discovered": 3, 00:10:53.315 "num_base_bdevs_operational": 3, 00:10:53.315 "base_bdevs_list": [ 00:10:53.315 { 00:10:53.315 "name": "BaseBdev1", 00:10:53.315 "uuid": "4a20eb08-991a-58cd-ad5d-92ca56810a0f", 
00:10:53.315 "is_configured": true, 00:10:53.315 "data_offset": 2048, 00:10:53.315 "data_size": 63488 00:10:53.315 }, 00:10:53.315 { 00:10:53.315 "name": "BaseBdev2", 00:10:53.315 "uuid": "3ac77365-caaf-5b06-8ae1-7e97b6bdc3da", 00:10:53.315 "is_configured": true, 00:10:53.315 "data_offset": 2048, 00:10:53.315 "data_size": 63488 00:10:53.315 }, 00:10:53.315 { 00:10:53.315 "name": "BaseBdev3", 00:10:53.315 "uuid": "d8eee8c6-cd14-5528-bb93-38c35aabf1e3", 00:10:53.315 "is_configured": true, 00:10:53.315 "data_offset": 2048, 00:10:53.315 "data_size": 63488 00:10:53.315 } 00:10:53.315 ] 00:10:53.315 }' 00:10:53.316 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.316 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.579 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:53.579 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:53.579 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:53.579 [2024-10-09 03:13:36.845932] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:53.579 [2024-10-09 03:13:36.846019] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:53.579 [2024-10-09 03:13:36.848905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:53.579 [2024-10-09 03:13:36.849006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:53.579 [2024-10-09 03:13:36.849140] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:53.579 [2024-10-09 03:13:36.849211] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:53.579 { 00:10:53.579 "results": [ 00:10:53.579 { 00:10:53.579 "job": "raid_bdev1", 
00:10:53.579 "core_mask": "0x1", 00:10:53.579 "workload": "randrw", 00:10:53.579 "percentage": 50, 00:10:53.579 "status": "finished", 00:10:53.579 "queue_depth": 1, 00:10:53.579 "io_size": 131072, 00:10:53.579 "runtime": 1.41442, 00:10:53.579 "iops": 13164.406611897457, 00:10:53.579 "mibps": 1645.5508264871821, 00:10:53.579 "io_failed": 0, 00:10:53.579 "io_timeout": 0, 00:10:53.579 "avg_latency_us": 73.24318932077543, 00:10:53.579 "min_latency_us": 23.699563318777294, 00:10:53.579 "max_latency_us": 1609.7816593886462 00:10:53.579 } 00:10:53.579 ], 00:10:53.579 "core_count": 1 00:10:53.579 } 00:10:53.579 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:53.579 03:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69220 00:10:53.579 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 69220 ']' 00:10:53.579 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 69220 00:10:53.579 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:10:53.579 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:53.579 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69220 00:10:53.839 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:53.839 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:53.839 killing process with pid 69220 00:10:53.839 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69220' 00:10:53.839 03:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 69220 00:10:53.839 [2024-10-09 03:13:36.895721] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:53.839 03:13:36 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 69220 00:10:53.839 [2024-10-09 03:13:37.130374] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:55.219 03:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.hk1eLLnnLP 00:10:55.219 03:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:55.219 03:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:55.219 03:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:10:55.219 03:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:10:55.219 ************************************ 00:10:55.219 END TEST raid_read_error_test 00:10:55.219 ************************************ 00:10:55.219 03:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:55.219 03:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:10:55.219 03:13:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:10:55.219 00:10:55.219 real 0m4.815s 00:10:55.219 user 0m5.595s 00:10:55.219 sys 0m0.678s 00:10:55.219 03:13:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:55.219 03:13:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.479 03:13:38 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:10:55.479 03:13:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:10:55.479 03:13:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:55.479 03:13:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:55.479 ************************************ 00:10:55.479 START TEST raid_write_error_test 00:10:55.479 ************************************ 00:10:55.479 03:13:38 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 3 write 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rGLQ5b2rs8 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69365 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69365 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 69365 ']' 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:55.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:55.479 03:13:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:55.479 [2024-10-09 03:13:38.658673] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:10:55.479 [2024-10-09 03:13:38.658825] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69365 ] 00:10:55.739 [2024-10-09 03:13:38.822332] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:55.997 [2024-10-09 03:13:39.077607] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:10:55.998 [2024-10-09 03:13:39.280232] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:55.998 [2024-10-09 03:13:39.280274] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:56.256 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:56.256 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:10:56.256 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:56.256 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:56.256 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.256 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.256 BaseBdev1_malloc 00:10:56.256 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.256 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:56.256 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.256 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.256 true 00:10:56.256 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.256 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:56.256 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.256 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.515 [2024-10-09 03:13:39.560271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:56.515 [2024-10-09 03:13:39.560324] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.515 [2024-10-09 03:13:39.560342] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:56.515 [2024-10-09 03:13:39.560352] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.515 [2024-10-09 03:13:39.562635] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.515 [2024-10-09 03:13:39.562675] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:56.515 BaseBdev1 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:56.515 BaseBdev2_malloc 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.515 true 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.515 [2024-10-09 03:13:39.636081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:56.515 [2024-10-09 03:13:39.636132] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.515 [2024-10-09 03:13:39.636149] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:56.515 [2024-10-09 03:13:39.636159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.515 [2024-10-09 03:13:39.638336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.515 [2024-10-09 03:13:39.638472] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:56.515 BaseBdev2 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:56.515 03:13:39 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.515 BaseBdev3_malloc 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.515 true 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.515 [2024-10-09 03:13:39.703468] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:56.515 [2024-10-09 03:13:39.703519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:56.515 [2024-10-09 03:13:39.703535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:56.515 [2024-10-09 03:13:39.703546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:56.515 [2024-10-09 03:13:39.705766] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:56.515 [2024-10-09 03:13:39.705807] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:56.515 BaseBdev3 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.515 [2024-10-09 03:13:39.715512] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:56.515 [2024-10-09 03:13:39.717462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:56.515 [2024-10-09 03:13:39.717538] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:56.515 [2024-10-09 03:13:39.717740] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:56.515 [2024-10-09 03:13:39.717753] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:10:56.515 [2024-10-09 03:13:39.718004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:56.515 [2024-10-09 03:13:39.718179] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:56.515 [2024-10-09 03:13:39.718193] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:56.515 [2024-10-09 03:13:39.718341] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:56.515 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:56.516 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:56.516 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:56.516 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:56.516 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:56.516 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:56.516 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:56.516 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:56.516 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:56.516 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:56.516 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:56.516 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:56.516 "name": "raid_bdev1", 00:10:56.516 "uuid": "3e2aff2d-5f1e-4b51-8344-2dfccd9dc9bd", 00:10:56.516 "strip_size_kb": 0, 00:10:56.516 "state": "online", 00:10:56.516 "raid_level": "raid1", 00:10:56.516 "superblock": true, 00:10:56.516 "num_base_bdevs": 3, 00:10:56.516 "num_base_bdevs_discovered": 3, 00:10:56.516 "num_base_bdevs_operational": 3, 00:10:56.516 "base_bdevs_list": [ 00:10:56.516 { 00:10:56.516 "name": "BaseBdev1", 00:10:56.516 
"uuid": "2421f8e6-2fac-5173-9396-593bdcc531ac", 00:10:56.516 "is_configured": true, 00:10:56.516 "data_offset": 2048, 00:10:56.516 "data_size": 63488 00:10:56.516 }, 00:10:56.516 { 00:10:56.516 "name": "BaseBdev2", 00:10:56.516 "uuid": "5f4b4c76-07ef-5f5d-9130-636d230a22ef", 00:10:56.516 "is_configured": true, 00:10:56.516 "data_offset": 2048, 00:10:56.516 "data_size": 63488 00:10:56.516 }, 00:10:56.516 { 00:10:56.516 "name": "BaseBdev3", 00:10:56.516 "uuid": "0602ca00-4bca-5093-9402-bef43f5f27b4", 00:10:56.516 "is_configured": true, 00:10:56.516 "data_offset": 2048, 00:10:56.516 "data_size": 63488 00:10:56.516 } 00:10:56.516 ] 00:10:56.516 }' 00:10:56.516 03:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:56.516 03:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.083 03:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:57.083 03:13:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:57.083 [2024-10-09 03:13:40.279816] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.020 [2024-10-09 03:13:41.200296] bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:10:58.020 [2024-10-09 03:13:41.200350] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:58.020 [2024-10-09 03:13:41.200589] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000005fb0 
00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.020 "name": "raid_bdev1", 00:10:58.020 "uuid": "3e2aff2d-5f1e-4b51-8344-2dfccd9dc9bd", 00:10:58.020 "strip_size_kb": 0, 00:10:58.020 "state": "online", 00:10:58.020 "raid_level": "raid1", 00:10:58.020 "superblock": true, 00:10:58.020 "num_base_bdevs": 3, 00:10:58.020 "num_base_bdevs_discovered": 2, 00:10:58.020 "num_base_bdevs_operational": 2, 00:10:58.020 "base_bdevs_list": [ 00:10:58.020 { 00:10:58.020 "name": null, 00:10:58.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.020 "is_configured": false, 00:10:58.020 "data_offset": 0, 00:10:58.020 "data_size": 63488 00:10:58.020 }, 00:10:58.020 { 00:10:58.020 "name": "BaseBdev2", 00:10:58.020 "uuid": "5f4b4c76-07ef-5f5d-9130-636d230a22ef", 00:10:58.020 "is_configured": true, 00:10:58.020 "data_offset": 2048, 00:10:58.020 "data_size": 63488 00:10:58.020 }, 00:10:58.020 { 00:10:58.020 "name": "BaseBdev3", 00:10:58.020 "uuid": "0602ca00-4bca-5093-9402-bef43f5f27b4", 00:10:58.020 "is_configured": true, 00:10:58.020 "data_offset": 2048, 00:10:58.020 "data_size": 63488 00:10:58.020 } 00:10:58.020 ] 00:10:58.020 }' 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.020 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.590 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:58.590 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:58.590 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.590 [2024-10-09 03:13:41.659962] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:58.590 [2024-10-09 03:13:41.660067] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:58.590 [2024-10-09 03:13:41.662895] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:58.590 [2024-10-09 03:13:41.662983] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.590 [2024-10-09 03:13:41.663082] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:58.590 [2024-10-09 03:13:41.663155] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:58.590 { 00:10:58.590 "results": [ 00:10:58.590 { 00:10:58.590 "job": "raid_bdev1", 00:10:58.590 "core_mask": "0x1", 00:10:58.590 "workload": "randrw", 00:10:58.590 "percentage": 50, 00:10:58.590 "status": "finished", 00:10:58.590 "queue_depth": 1, 00:10:58.590 "io_size": 131072, 00:10:58.590 "runtime": 1.381031, 00:10:58.590 "iops": 14757.814994739438, 00:10:58.590 "mibps": 1844.7268743424297, 00:10:58.590 "io_failed": 0, 00:10:58.590 "io_timeout": 0, 00:10:58.590 "avg_latency_us": 65.07747103272185, 00:10:58.590 "min_latency_us": 23.58777292576419, 00:10:58.590 "max_latency_us": 1767.1825327510917 00:10:58.590 } 00:10:58.590 ], 00:10:58.590 "core_count": 1 00:10:58.590 } 00:10:58.590 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:58.590 03:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69365 00:10:58.590 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 69365 ']' 00:10:58.590 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 69365 00:10:58.590 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:10:58.590 03:13:41 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:58.590 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69365 00:10:58.590 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:58.590 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:58.590 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69365' 00:10:58.590 killing process with pid 69365 00:10:58.590 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 69365 00:10:58.590 [2024-10-09 03:13:41.694301] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:58.590 03:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 69365 00:10:58.850 [2024-10-09 03:13:41.933046] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:00.230 03:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:00.230 03:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rGLQ5b2rs8 00:11:00.230 03:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:00.230 03:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:00.230 03:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:00.230 ************************************ 00:11:00.230 END TEST raid_write_error_test 00:11:00.230 ************************************ 00:11:00.230 03:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:00.230 03:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:00.230 03:13:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:11:00.230 00:11:00.230 real 0m4.765s 00:11:00.230 user 0m5.628s 00:11:00.230 sys 0m0.553s 00:11:00.230 03:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:00.230 03:13:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.230 03:13:43 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:00.230 03:13:43 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:00.230 03:13:43 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:00.230 03:13:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:00.230 03:13:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:00.230 03:13:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:00.230 ************************************ 00:11:00.230 START TEST raid_state_function_test 00:11:00.230 ************************************ 00:11:00.230 03:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 false 00:11:00.230 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:00.230 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:00.230 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:00.230 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:00.230 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:00.230 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.230 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:00.230 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.230 
03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.230 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:00.230 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.230 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:00.231 03:13:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:00.231 Process raid pid: 69513 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69513 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69513' 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69513 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 69513 ']' 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:00.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:00.231 03:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.231 [2024-10-09 03:13:43.486502] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:11:00.231 [2024-10-09 03:13:43.486620] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:00.490 [2024-10-09 03:13:43.651097] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:00.749 [2024-10-09 03:13:43.861772] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:01.009 [2024-10-09 03:13:44.065930] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.009 [2024-10-09 03:13:44.065968] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.269 [2024-10-09 03:13:44.322816] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.269 [2024-10-09 03:13:44.322878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.269 [2024-10-09 03:13:44.322892] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.269 [2024-10-09 03:13:44.322903] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.269 [2024-10-09 03:13:44.322909] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:01.269 [2024-10-09 03:13:44.322918] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:01.269 [2024-10-09 03:13:44.322924] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:01.269 [2024-10-09 03:13:44.322933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "Existed_Raid")' 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.269 "name": "Existed_Raid", 00:11:01.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.269 "strip_size_kb": 64, 00:11:01.269 "state": "configuring", 00:11:01.269 "raid_level": "raid0", 00:11:01.269 "superblock": false, 00:11:01.269 "num_base_bdevs": 4, 00:11:01.269 "num_base_bdevs_discovered": 0, 00:11:01.269 "num_base_bdevs_operational": 4, 00:11:01.269 "base_bdevs_list": [ 00:11:01.269 { 00:11:01.269 "name": "BaseBdev1", 00:11:01.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.269 "is_configured": false, 00:11:01.269 "data_offset": 0, 00:11:01.269 "data_size": 0 00:11:01.269 }, 00:11:01.269 { 00:11:01.269 "name": "BaseBdev2", 00:11:01.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.269 "is_configured": false, 00:11:01.269 "data_offset": 0, 00:11:01.269 "data_size": 0 00:11:01.269 }, 00:11:01.269 { 00:11:01.269 "name": "BaseBdev3", 00:11:01.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.269 "is_configured": false, 00:11:01.269 "data_offset": 0, 00:11:01.269 "data_size": 0 00:11:01.269 }, 00:11:01.269 { 00:11:01.269 "name": "BaseBdev4", 00:11:01.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.269 "is_configured": false, 00:11:01.269 "data_offset": 0, 00:11:01.269 "data_size": 0 00:11:01.269 } 00:11:01.269 ] 00:11:01.269 }' 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.269 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.538 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:01.538 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.538 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.538 [2024-10-09 03:13:44.766004] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:01.538 [2024-10-09 03:13:44.766106] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:01.538 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.538 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:01.538 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.538 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.538 [2024-10-09 03:13:44.777989] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:01.538 [2024-10-09 03:13:44.778066] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:01.539 [2024-10-09 03:13:44.778093] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:01.539 [2024-10-09 03:13:44.778115] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:01.539 [2024-10-09 03:13:44.778132] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:01.539 [2024-10-09 03:13:44.778152] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:01.539 [2024-10-09 03:13:44.778169] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:01.539 [2024-10-09 03:13:44.778189] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:01.539 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.539 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:01.539 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.539 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.539 [2024-10-09 03:13:44.836239] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:01.828 BaseBdev1 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.828 [ 00:11:01.828 { 00:11:01.828 "name": "BaseBdev1", 00:11:01.828 "aliases": [ 00:11:01.828 "b5ee4e31-383a-4bc5-b3ea-594c5d559ede" 00:11:01.828 ], 00:11:01.828 "product_name": "Malloc disk", 00:11:01.828 "block_size": 512, 00:11:01.828 "num_blocks": 65536, 00:11:01.828 "uuid": "b5ee4e31-383a-4bc5-b3ea-594c5d559ede", 00:11:01.828 "assigned_rate_limits": { 00:11:01.828 "rw_ios_per_sec": 0, 00:11:01.828 "rw_mbytes_per_sec": 0, 00:11:01.828 "r_mbytes_per_sec": 0, 00:11:01.828 "w_mbytes_per_sec": 0 00:11:01.828 }, 00:11:01.828 "claimed": true, 00:11:01.828 "claim_type": "exclusive_write", 00:11:01.828 "zoned": false, 00:11:01.828 "supported_io_types": { 00:11:01.828 "read": true, 00:11:01.828 "write": true, 00:11:01.828 "unmap": true, 00:11:01.828 "flush": true, 00:11:01.828 "reset": true, 00:11:01.828 "nvme_admin": false, 00:11:01.828 "nvme_io": false, 00:11:01.828 "nvme_io_md": false, 00:11:01.828 "write_zeroes": true, 00:11:01.828 "zcopy": true, 00:11:01.828 "get_zone_info": false, 00:11:01.828 "zone_management": false, 00:11:01.828 "zone_append": false, 00:11:01.828 "compare": false, 00:11:01.828 "compare_and_write": false, 00:11:01.828 "abort": true, 00:11:01.828 "seek_hole": false, 00:11:01.828 "seek_data": false, 00:11:01.828 "copy": true, 00:11:01.828 "nvme_iov_md": false 00:11:01.828 }, 00:11:01.828 "memory_domains": [ 00:11:01.828 { 00:11:01.828 "dma_device_id": "system", 00:11:01.828 "dma_device_type": 1 00:11:01.828 }, 00:11:01.828 { 00:11:01.828 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.828 "dma_device_type": 2 00:11:01.828 } 00:11:01.828 ], 00:11:01.828 "driver_specific": {} 00:11:01.828 } 00:11:01.828 ] 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:01.828 "name": "Existed_Raid", 
00:11:01.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.828 "strip_size_kb": 64, 00:11:01.828 "state": "configuring", 00:11:01.828 "raid_level": "raid0", 00:11:01.828 "superblock": false, 00:11:01.828 "num_base_bdevs": 4, 00:11:01.828 "num_base_bdevs_discovered": 1, 00:11:01.828 "num_base_bdevs_operational": 4, 00:11:01.828 "base_bdevs_list": [ 00:11:01.828 { 00:11:01.828 "name": "BaseBdev1", 00:11:01.828 "uuid": "b5ee4e31-383a-4bc5-b3ea-594c5d559ede", 00:11:01.828 "is_configured": true, 00:11:01.828 "data_offset": 0, 00:11:01.828 "data_size": 65536 00:11:01.828 }, 00:11:01.828 { 00:11:01.828 "name": "BaseBdev2", 00:11:01.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.828 "is_configured": false, 00:11:01.828 "data_offset": 0, 00:11:01.828 "data_size": 0 00:11:01.828 }, 00:11:01.828 { 00:11:01.828 "name": "BaseBdev3", 00:11:01.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.828 "is_configured": false, 00:11:01.828 "data_offset": 0, 00:11:01.828 "data_size": 0 00:11:01.828 }, 00:11:01.828 { 00:11:01.828 "name": "BaseBdev4", 00:11:01.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:01.828 "is_configured": false, 00:11:01.828 "data_offset": 0, 00:11:01.828 "data_size": 0 00:11:01.828 } 00:11:01.828 ] 00:11:01.828 }' 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:01.828 03:13:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.088 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:02.088 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.088 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.088 [2024-10-09 03:13:45.339449] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:02.088 [2024-10-09 03:13:45.339508] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:02.088 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.089 [2024-10-09 03:13:45.351449] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:02.089 [2024-10-09 03:13:45.353330] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:02.089 [2024-10-09 03:13:45.353423] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:02.089 [2024-10-09 03:13:45.353453] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:02.089 [2024-10-09 03:13:45.353478] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:02.089 [2024-10-09 03:13:45.353498] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:02.089 [2024-10-09 03:13:45.353518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.089 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.348 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.348 "name": "Existed_Raid", 00:11:02.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.348 "strip_size_kb": 64, 00:11:02.348 "state": "configuring", 00:11:02.348 "raid_level": "raid0", 00:11:02.348 "superblock": false, 00:11:02.348 "num_base_bdevs": 4, 00:11:02.348 
"num_base_bdevs_discovered": 1, 00:11:02.348 "num_base_bdevs_operational": 4, 00:11:02.348 "base_bdevs_list": [ 00:11:02.348 { 00:11:02.348 "name": "BaseBdev1", 00:11:02.348 "uuid": "b5ee4e31-383a-4bc5-b3ea-594c5d559ede", 00:11:02.348 "is_configured": true, 00:11:02.348 "data_offset": 0, 00:11:02.348 "data_size": 65536 00:11:02.348 }, 00:11:02.348 { 00:11:02.348 "name": "BaseBdev2", 00:11:02.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.348 "is_configured": false, 00:11:02.348 "data_offset": 0, 00:11:02.348 "data_size": 0 00:11:02.348 }, 00:11:02.348 { 00:11:02.348 "name": "BaseBdev3", 00:11:02.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.348 "is_configured": false, 00:11:02.348 "data_offset": 0, 00:11:02.348 "data_size": 0 00:11:02.348 }, 00:11:02.348 { 00:11:02.348 "name": "BaseBdev4", 00:11:02.348 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.348 "is_configured": false, 00:11:02.348 "data_offset": 0, 00:11:02.348 "data_size": 0 00:11:02.348 } 00:11:02.348 ] 00:11:02.348 }' 00:11:02.348 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.348 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.607 [2024-10-09 03:13:45.876758] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:02.607 BaseBdev2 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:02.607 03:13:45 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.607 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.607 [ 00:11:02.607 { 00:11:02.607 "name": "BaseBdev2", 00:11:02.607 "aliases": [ 00:11:02.607 "8ee2dffe-517c-42df-81ef-ff294b48d9a6" 00:11:02.607 ], 00:11:02.607 "product_name": "Malloc disk", 00:11:02.607 "block_size": 512, 00:11:02.607 "num_blocks": 65536, 00:11:02.607 "uuid": "8ee2dffe-517c-42df-81ef-ff294b48d9a6", 00:11:02.607 "assigned_rate_limits": { 00:11:02.607 "rw_ios_per_sec": 0, 00:11:02.607 "rw_mbytes_per_sec": 0, 00:11:02.607 "r_mbytes_per_sec": 0, 00:11:02.607 "w_mbytes_per_sec": 0 00:11:02.607 }, 00:11:02.607 "claimed": true, 00:11:02.607 "claim_type": "exclusive_write", 00:11:02.607 "zoned": false, 00:11:02.607 "supported_io_types": { 
00:11:02.607 "read": true, 00:11:02.607 "write": true, 00:11:02.607 "unmap": true, 00:11:02.607 "flush": true, 00:11:02.607 "reset": true, 00:11:02.607 "nvme_admin": false, 00:11:02.607 "nvme_io": false, 00:11:02.607 "nvme_io_md": false, 00:11:02.607 "write_zeroes": true, 00:11:02.607 "zcopy": true, 00:11:02.607 "get_zone_info": false, 00:11:02.607 "zone_management": false, 00:11:02.607 "zone_append": false, 00:11:02.865 "compare": false, 00:11:02.865 "compare_and_write": false, 00:11:02.865 "abort": true, 00:11:02.865 "seek_hole": false, 00:11:02.865 "seek_data": false, 00:11:02.865 "copy": true, 00:11:02.865 "nvme_iov_md": false 00:11:02.865 }, 00:11:02.865 "memory_domains": [ 00:11:02.865 { 00:11:02.865 "dma_device_id": "system", 00:11:02.865 "dma_device_type": 1 00:11:02.865 }, 00:11:02.865 { 00:11:02.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:02.865 "dma_device_type": 2 00:11:02.865 } 00:11:02.865 ], 00:11:02.865 "driver_specific": {} 00:11:02.865 } 00:11:02.865 ] 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:02.865 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:02.865 "name": "Existed_Raid", 00:11:02.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.865 "strip_size_kb": 64, 00:11:02.865 "state": "configuring", 00:11:02.865 "raid_level": "raid0", 00:11:02.865 "superblock": false, 00:11:02.865 "num_base_bdevs": 4, 00:11:02.865 "num_base_bdevs_discovered": 2, 00:11:02.865 "num_base_bdevs_operational": 4, 00:11:02.865 "base_bdevs_list": [ 00:11:02.865 { 00:11:02.865 "name": "BaseBdev1", 00:11:02.865 "uuid": "b5ee4e31-383a-4bc5-b3ea-594c5d559ede", 00:11:02.865 "is_configured": true, 00:11:02.865 "data_offset": 0, 00:11:02.865 "data_size": 65536 00:11:02.865 }, 00:11:02.865 { 00:11:02.865 "name": "BaseBdev2", 00:11:02.866 "uuid": "8ee2dffe-517c-42df-81ef-ff294b48d9a6", 00:11:02.866 
"is_configured": true, 00:11:02.866 "data_offset": 0, 00:11:02.866 "data_size": 65536 00:11:02.866 }, 00:11:02.866 { 00:11:02.866 "name": "BaseBdev3", 00:11:02.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.866 "is_configured": false, 00:11:02.866 "data_offset": 0, 00:11:02.866 "data_size": 0 00:11:02.866 }, 00:11:02.866 { 00:11:02.866 "name": "BaseBdev4", 00:11:02.866 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:02.866 "is_configured": false, 00:11:02.866 "data_offset": 0, 00:11:02.866 "data_size": 0 00:11:02.866 } 00:11:02.866 ] 00:11:02.866 }' 00:11:02.866 03:13:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:02.866 03:13:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.124 [2024-10-09 03:13:46.382138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:03.124 BaseBdev3 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.124 [ 00:11:03.124 { 00:11:03.124 "name": "BaseBdev3", 00:11:03.124 "aliases": [ 00:11:03.124 "ff8a5bd9-60b6-48fd-8c42-09bd2558368f" 00:11:03.124 ], 00:11:03.124 "product_name": "Malloc disk", 00:11:03.124 "block_size": 512, 00:11:03.124 "num_blocks": 65536, 00:11:03.124 "uuid": "ff8a5bd9-60b6-48fd-8c42-09bd2558368f", 00:11:03.124 "assigned_rate_limits": { 00:11:03.124 "rw_ios_per_sec": 0, 00:11:03.124 "rw_mbytes_per_sec": 0, 00:11:03.124 "r_mbytes_per_sec": 0, 00:11:03.124 "w_mbytes_per_sec": 0 00:11:03.124 }, 00:11:03.124 "claimed": true, 00:11:03.124 "claim_type": "exclusive_write", 00:11:03.124 "zoned": false, 00:11:03.124 "supported_io_types": { 00:11:03.124 "read": true, 00:11:03.124 "write": true, 00:11:03.124 "unmap": true, 00:11:03.124 "flush": true, 00:11:03.124 "reset": true, 00:11:03.124 "nvme_admin": false, 00:11:03.124 "nvme_io": false, 00:11:03.124 "nvme_io_md": false, 00:11:03.124 "write_zeroes": true, 00:11:03.124 "zcopy": true, 00:11:03.124 "get_zone_info": false, 00:11:03.124 "zone_management": false, 00:11:03.124 "zone_append": false, 00:11:03.124 "compare": false, 00:11:03.124 "compare_and_write": false, 
00:11:03.124 "abort": true, 00:11:03.124 "seek_hole": false, 00:11:03.124 "seek_data": false, 00:11:03.124 "copy": true, 00:11:03.124 "nvme_iov_md": false 00:11:03.124 }, 00:11:03.124 "memory_domains": [ 00:11:03.124 { 00:11:03.124 "dma_device_id": "system", 00:11:03.124 "dma_device_type": 1 00:11:03.124 }, 00:11:03.124 { 00:11:03.124 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.124 "dma_device_type": 2 00:11:03.124 } 00:11:03.124 ], 00:11:03.124 "driver_specific": {} 00:11:03.124 } 00:11:03.124 ] 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.124 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:03.383 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.383 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.383 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.383 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.383 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.383 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.383 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.383 "name": "Existed_Raid", 00:11:03.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.383 "strip_size_kb": 64, 00:11:03.383 "state": "configuring", 00:11:03.383 "raid_level": "raid0", 00:11:03.383 "superblock": false, 00:11:03.383 "num_base_bdevs": 4, 00:11:03.383 "num_base_bdevs_discovered": 3, 00:11:03.383 "num_base_bdevs_operational": 4, 00:11:03.383 "base_bdevs_list": [ 00:11:03.383 { 00:11:03.383 "name": "BaseBdev1", 00:11:03.383 "uuid": "b5ee4e31-383a-4bc5-b3ea-594c5d559ede", 00:11:03.383 "is_configured": true, 00:11:03.383 "data_offset": 0, 00:11:03.383 "data_size": 65536 00:11:03.383 }, 00:11:03.383 { 00:11:03.383 "name": "BaseBdev2", 00:11:03.383 "uuid": "8ee2dffe-517c-42df-81ef-ff294b48d9a6", 00:11:03.383 "is_configured": true, 00:11:03.383 "data_offset": 0, 00:11:03.383 "data_size": 65536 00:11:03.383 }, 00:11:03.383 { 00:11:03.383 "name": "BaseBdev3", 00:11:03.383 "uuid": "ff8a5bd9-60b6-48fd-8c42-09bd2558368f", 00:11:03.383 "is_configured": true, 00:11:03.383 "data_offset": 0, 00:11:03.383 "data_size": 65536 00:11:03.383 }, 00:11:03.383 { 00:11:03.383 "name": "BaseBdev4", 00:11:03.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:03.383 "is_configured": false, 
00:11:03.383 "data_offset": 0, 00:11:03.383 "data_size": 0 00:11:03.383 } 00:11:03.383 ] 00:11:03.383 }' 00:11:03.383 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.383 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.643 [2024-10-09 03:13:46.906676] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:03.643 [2024-10-09 03:13:46.906801] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:03.643 [2024-10-09 03:13:46.906818] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:03.643 [2024-10-09 03:13:46.907124] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:03.643 [2024-10-09 03:13:46.907304] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:03.643 [2024-10-09 03:13:46.907320] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:03.643 [2024-10-09 03:13:46.907595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:03.643 BaseBdev4 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.643 [ 00:11:03.643 { 00:11:03.643 "name": "BaseBdev4", 00:11:03.643 "aliases": [ 00:11:03.643 "02dcb0cd-5803-458b-b46e-9816f3499324" 00:11:03.643 ], 00:11:03.643 "product_name": "Malloc disk", 00:11:03.643 "block_size": 512, 00:11:03.643 "num_blocks": 65536, 00:11:03.643 "uuid": "02dcb0cd-5803-458b-b46e-9816f3499324", 00:11:03.643 "assigned_rate_limits": { 00:11:03.643 "rw_ios_per_sec": 0, 00:11:03.643 "rw_mbytes_per_sec": 0, 00:11:03.643 "r_mbytes_per_sec": 0, 00:11:03.643 "w_mbytes_per_sec": 0 00:11:03.643 }, 00:11:03.643 "claimed": true, 00:11:03.643 "claim_type": "exclusive_write", 00:11:03.643 "zoned": false, 00:11:03.643 "supported_io_types": { 00:11:03.643 "read": true, 00:11:03.643 "write": true, 00:11:03.643 "unmap": true, 00:11:03.643 "flush": true, 00:11:03.643 "reset": true, 00:11:03.643 
"nvme_admin": false, 00:11:03.643 "nvme_io": false, 00:11:03.643 "nvme_io_md": false, 00:11:03.643 "write_zeroes": true, 00:11:03.643 "zcopy": true, 00:11:03.643 "get_zone_info": false, 00:11:03.643 "zone_management": false, 00:11:03.643 "zone_append": false, 00:11:03.643 "compare": false, 00:11:03.643 "compare_and_write": false, 00:11:03.643 "abort": true, 00:11:03.643 "seek_hole": false, 00:11:03.643 "seek_data": false, 00:11:03.643 "copy": true, 00:11:03.643 "nvme_iov_md": false 00:11:03.643 }, 00:11:03.643 "memory_domains": [ 00:11:03.643 { 00:11:03.643 "dma_device_id": "system", 00:11:03.643 "dma_device_type": 1 00:11:03.643 }, 00:11:03.643 { 00:11:03.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:03.643 "dma_device_type": 2 00:11:03.643 } 00:11:03.643 ], 00:11:03.643 "driver_specific": {} 00:11:03.643 } 00:11:03.643 ] 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:03.643 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:03.903 03:13:46 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:03.903 "name": "Existed_Raid", 00:11:03.903 "uuid": "6b130a4a-f1e1-41a4-8392-5534deb41157", 00:11:03.903 "strip_size_kb": 64, 00:11:03.903 "state": "online", 00:11:03.903 "raid_level": "raid0", 00:11:03.903 "superblock": false, 00:11:03.903 "num_base_bdevs": 4, 00:11:03.903 "num_base_bdevs_discovered": 4, 00:11:03.903 "num_base_bdevs_operational": 4, 00:11:03.903 "base_bdevs_list": [ 00:11:03.903 { 00:11:03.903 "name": "BaseBdev1", 00:11:03.903 "uuid": "b5ee4e31-383a-4bc5-b3ea-594c5d559ede", 00:11:03.903 "is_configured": true, 00:11:03.903 "data_offset": 0, 00:11:03.903 "data_size": 65536 00:11:03.903 }, 00:11:03.903 { 00:11:03.903 "name": "BaseBdev2", 00:11:03.903 "uuid": "8ee2dffe-517c-42df-81ef-ff294b48d9a6", 00:11:03.903 "is_configured": true, 00:11:03.903 "data_offset": 0, 00:11:03.903 "data_size": 65536 00:11:03.903 }, 00:11:03.903 { 00:11:03.903 "name": "BaseBdev3", 00:11:03.903 "uuid": 
"ff8a5bd9-60b6-48fd-8c42-09bd2558368f", 00:11:03.903 "is_configured": true, 00:11:03.903 "data_offset": 0, 00:11:03.903 "data_size": 65536 00:11:03.903 }, 00:11:03.903 { 00:11:03.903 "name": "BaseBdev4", 00:11:03.903 "uuid": "02dcb0cd-5803-458b-b46e-9816f3499324", 00:11:03.903 "is_configured": true, 00:11:03.903 "data_offset": 0, 00:11:03.903 "data_size": 65536 00:11:03.903 } 00:11:03.903 ] 00:11:03.903 }' 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:03.903 03:13:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.163 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:04.163 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:04.163 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:04.163 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:04.163 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:04.163 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:04.163 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:04.164 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:04.164 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.164 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.164 [2024-10-09 03:13:47.350309] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:04.164 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.164 03:13:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:04.164 "name": "Existed_Raid", 00:11:04.164 "aliases": [ 00:11:04.164 "6b130a4a-f1e1-41a4-8392-5534deb41157" 00:11:04.164 ], 00:11:04.164 "product_name": "Raid Volume", 00:11:04.164 "block_size": 512, 00:11:04.164 "num_blocks": 262144, 00:11:04.164 "uuid": "6b130a4a-f1e1-41a4-8392-5534deb41157", 00:11:04.164 "assigned_rate_limits": { 00:11:04.164 "rw_ios_per_sec": 0, 00:11:04.164 "rw_mbytes_per_sec": 0, 00:11:04.164 "r_mbytes_per_sec": 0, 00:11:04.164 "w_mbytes_per_sec": 0 00:11:04.164 }, 00:11:04.164 "claimed": false, 00:11:04.164 "zoned": false, 00:11:04.164 "supported_io_types": { 00:11:04.164 "read": true, 00:11:04.164 "write": true, 00:11:04.164 "unmap": true, 00:11:04.164 "flush": true, 00:11:04.164 "reset": true, 00:11:04.164 "nvme_admin": false, 00:11:04.164 "nvme_io": false, 00:11:04.164 "nvme_io_md": false, 00:11:04.164 "write_zeroes": true, 00:11:04.164 "zcopy": false, 00:11:04.164 "get_zone_info": false, 00:11:04.164 "zone_management": false, 00:11:04.164 "zone_append": false, 00:11:04.164 "compare": false, 00:11:04.164 "compare_and_write": false, 00:11:04.164 "abort": false, 00:11:04.164 "seek_hole": false, 00:11:04.164 "seek_data": false, 00:11:04.164 "copy": false, 00:11:04.164 "nvme_iov_md": false 00:11:04.164 }, 00:11:04.164 "memory_domains": [ 00:11:04.164 { 00:11:04.164 "dma_device_id": "system", 00:11:04.164 "dma_device_type": 1 00:11:04.164 }, 00:11:04.164 { 00:11:04.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.164 "dma_device_type": 2 00:11:04.164 }, 00:11:04.164 { 00:11:04.164 "dma_device_id": "system", 00:11:04.164 "dma_device_type": 1 00:11:04.164 }, 00:11:04.164 { 00:11:04.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.164 "dma_device_type": 2 00:11:04.164 }, 00:11:04.164 { 00:11:04.164 "dma_device_id": "system", 00:11:04.164 "dma_device_type": 1 00:11:04.164 }, 00:11:04.164 { 00:11:04.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:04.164 "dma_device_type": 2 00:11:04.164 }, 00:11:04.164 { 00:11:04.164 "dma_device_id": "system", 00:11:04.164 "dma_device_type": 1 00:11:04.164 }, 00:11:04.164 { 00:11:04.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:04.164 "dma_device_type": 2 00:11:04.164 } 00:11:04.164 ], 00:11:04.164 "driver_specific": { 00:11:04.164 "raid": { 00:11:04.164 "uuid": "6b130a4a-f1e1-41a4-8392-5534deb41157", 00:11:04.164 "strip_size_kb": 64, 00:11:04.164 "state": "online", 00:11:04.164 "raid_level": "raid0", 00:11:04.164 "superblock": false, 00:11:04.164 "num_base_bdevs": 4, 00:11:04.164 "num_base_bdevs_discovered": 4, 00:11:04.164 "num_base_bdevs_operational": 4, 00:11:04.164 "base_bdevs_list": [ 00:11:04.164 { 00:11:04.164 "name": "BaseBdev1", 00:11:04.164 "uuid": "b5ee4e31-383a-4bc5-b3ea-594c5d559ede", 00:11:04.164 "is_configured": true, 00:11:04.164 "data_offset": 0, 00:11:04.164 "data_size": 65536 00:11:04.164 }, 00:11:04.164 { 00:11:04.164 "name": "BaseBdev2", 00:11:04.164 "uuid": "8ee2dffe-517c-42df-81ef-ff294b48d9a6", 00:11:04.164 "is_configured": true, 00:11:04.164 "data_offset": 0, 00:11:04.164 "data_size": 65536 00:11:04.164 }, 00:11:04.164 { 00:11:04.164 "name": "BaseBdev3", 00:11:04.164 "uuid": "ff8a5bd9-60b6-48fd-8c42-09bd2558368f", 00:11:04.164 "is_configured": true, 00:11:04.164 "data_offset": 0, 00:11:04.164 "data_size": 65536 00:11:04.164 }, 00:11:04.164 { 00:11:04.164 "name": "BaseBdev4", 00:11:04.164 "uuid": "02dcb0cd-5803-458b-b46e-9816f3499324", 00:11:04.164 "is_configured": true, 00:11:04.164 "data_offset": 0, 00:11:04.164 "data_size": 65536 00:11:04.164 } 00:11:04.164 ] 00:11:04.164 } 00:11:04.164 } 00:11:04.164 }' 00:11:04.164 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:04.164 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:04.164 BaseBdev2 00:11:04.164 BaseBdev3 
00:11:04.164 BaseBdev4' 00:11:04.164 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.425 03:13:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:04.425 03:13:47 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.425 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.425 [2024-10-09 03:13:47.685470] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:04.425 [2024-10-09 03:13:47.685505] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:04.425 [2024-10-09 03:13:47.685563] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.685 "name": "Existed_Raid", 00:11:04.685 "uuid": "6b130a4a-f1e1-41a4-8392-5534deb41157", 00:11:04.685 "strip_size_kb": 64, 00:11:04.685 "state": "offline", 00:11:04.685 "raid_level": "raid0", 00:11:04.685 "superblock": false, 00:11:04.685 "num_base_bdevs": 4, 00:11:04.685 "num_base_bdevs_discovered": 3, 00:11:04.685 "num_base_bdevs_operational": 3, 00:11:04.685 "base_bdevs_list": [ 00:11:04.685 { 00:11:04.685 "name": null, 00:11:04.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:04.685 "is_configured": false, 00:11:04.685 "data_offset": 0, 00:11:04.685 "data_size": 65536 00:11:04.685 }, 00:11:04.685 { 00:11:04.685 "name": "BaseBdev2", 00:11:04.685 "uuid": "8ee2dffe-517c-42df-81ef-ff294b48d9a6", 00:11:04.685 "is_configured": 
true, 00:11:04.685 "data_offset": 0, 00:11:04.685 "data_size": 65536 00:11:04.685 }, 00:11:04.685 { 00:11:04.685 "name": "BaseBdev3", 00:11:04.685 "uuid": "ff8a5bd9-60b6-48fd-8c42-09bd2558368f", 00:11:04.685 "is_configured": true, 00:11:04.685 "data_offset": 0, 00:11:04.685 "data_size": 65536 00:11:04.685 }, 00:11:04.685 { 00:11:04.685 "name": "BaseBdev4", 00:11:04.685 "uuid": "02dcb0cd-5803-458b-b46e-9816f3499324", 00:11:04.685 "is_configured": true, 00:11:04.685 "data_offset": 0, 00:11:04.685 "data_size": 65536 00:11:04.685 } 00:11:04.685 ] 00:11:04.685 }' 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.685 03:13:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.944 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:04.944 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:04.944 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:04.944 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.944 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:04.944 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.944 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:04.944 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:04.944 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:04.944 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:04.944 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:04.944 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.944 [2024-10-09 03:13:48.242861] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.203 [2024-10-09 03:13:48.400569] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.203 03:13:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.203 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.463 [2024-10-09 03:13:48.555087] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:05.463 [2024-10-09 03:13:48.555184] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.463 BaseBdev2 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.463 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.463 [ 00:11:05.463 { 00:11:05.463 "name": "BaseBdev2", 00:11:05.463 "aliases": [ 00:11:05.463 "f88442e8-5454-42db-88f4-c471fa58bee4" 00:11:05.463 ], 00:11:05.463 "product_name": "Malloc disk", 00:11:05.463 "block_size": 512, 00:11:05.463 "num_blocks": 65536, 00:11:05.463 "uuid": "f88442e8-5454-42db-88f4-c471fa58bee4", 00:11:05.463 "assigned_rate_limits": { 00:11:05.463 "rw_ios_per_sec": 0, 00:11:05.463 "rw_mbytes_per_sec": 0, 00:11:05.463 "r_mbytes_per_sec": 0, 00:11:05.463 "w_mbytes_per_sec": 0 00:11:05.463 }, 00:11:05.463 "claimed": false, 00:11:05.463 "zoned": false, 00:11:05.463 "supported_io_types": { 00:11:05.463 "read": true, 00:11:05.463 "write": true, 00:11:05.463 "unmap": true, 00:11:05.463 "flush": true, 00:11:05.463 "reset": true, 00:11:05.463 "nvme_admin": false, 00:11:05.463 "nvme_io": false, 00:11:05.463 "nvme_io_md": false, 00:11:05.463 "write_zeroes": true, 00:11:05.463 "zcopy": true, 00:11:05.463 "get_zone_info": false, 00:11:05.463 "zone_management": false, 00:11:05.463 "zone_append": false, 00:11:05.463 "compare": false, 00:11:05.463 "compare_and_write": false, 00:11:05.463 "abort": true, 00:11:05.463 "seek_hole": false, 00:11:05.463 "seek_data": false, 
00:11:05.463 "copy": true, 00:11:05.723 "nvme_iov_md": false 00:11:05.723 }, 00:11:05.723 "memory_domains": [ 00:11:05.723 { 00:11:05.723 "dma_device_id": "system", 00:11:05.723 "dma_device_type": 1 00:11:05.723 }, 00:11:05.723 { 00:11:05.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.723 "dma_device_type": 2 00:11:05.723 } 00:11:05.723 ], 00:11:05.723 "driver_specific": {} 00:11:05.723 } 00:11:05.723 ] 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.723 BaseBdev3 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:05.723 
03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.723 [ 00:11:05.723 { 00:11:05.723 "name": "BaseBdev3", 00:11:05.723 "aliases": [ 00:11:05.723 "537b7c2b-cef6-4c9b-b93b-f56d97ddc114" 00:11:05.723 ], 00:11:05.723 "product_name": "Malloc disk", 00:11:05.723 "block_size": 512, 00:11:05.723 "num_blocks": 65536, 00:11:05.723 "uuid": "537b7c2b-cef6-4c9b-b93b-f56d97ddc114", 00:11:05.723 "assigned_rate_limits": { 00:11:05.723 "rw_ios_per_sec": 0, 00:11:05.723 "rw_mbytes_per_sec": 0, 00:11:05.723 "r_mbytes_per_sec": 0, 00:11:05.723 "w_mbytes_per_sec": 0 00:11:05.723 }, 00:11:05.723 "claimed": false, 00:11:05.723 "zoned": false, 00:11:05.723 "supported_io_types": { 00:11:05.723 "read": true, 00:11:05.723 "write": true, 00:11:05.723 "unmap": true, 00:11:05.723 "flush": true, 00:11:05.723 "reset": true, 00:11:05.723 "nvme_admin": false, 00:11:05.723 "nvme_io": false, 00:11:05.723 "nvme_io_md": false, 00:11:05.723 "write_zeroes": true, 00:11:05.723 "zcopy": true, 00:11:05.723 "get_zone_info": false, 00:11:05.723 "zone_management": false, 00:11:05.723 "zone_append": false, 00:11:05.723 "compare": false, 00:11:05.723 "compare_and_write": false, 00:11:05.723 "abort": true, 00:11:05.723 "seek_hole": false, 00:11:05.723 "seek_data": false, 00:11:05.723 
"copy": true, 00:11:05.723 "nvme_iov_md": false 00:11:05.723 }, 00:11:05.723 "memory_domains": [ 00:11:05.723 { 00:11:05.723 "dma_device_id": "system", 00:11:05.723 "dma_device_type": 1 00:11:05.723 }, 00:11:05.723 { 00:11:05.723 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.723 "dma_device_type": 2 00:11:05.723 } 00:11:05.723 ], 00:11:05.723 "driver_specific": {} 00:11:05.723 } 00:11:05.723 ] 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.723 BaseBdev4 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.723 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:05.724 03:13:48 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.724 [ 00:11:05.724 { 00:11:05.724 "name": "BaseBdev4", 00:11:05.724 "aliases": [ 00:11:05.724 "fc052b21-e79d-44c5-810b-813bceb0dbb8" 00:11:05.724 ], 00:11:05.724 "product_name": "Malloc disk", 00:11:05.724 "block_size": 512, 00:11:05.724 "num_blocks": 65536, 00:11:05.724 "uuid": "fc052b21-e79d-44c5-810b-813bceb0dbb8", 00:11:05.724 "assigned_rate_limits": { 00:11:05.724 "rw_ios_per_sec": 0, 00:11:05.724 "rw_mbytes_per_sec": 0, 00:11:05.724 "r_mbytes_per_sec": 0, 00:11:05.724 "w_mbytes_per_sec": 0 00:11:05.724 }, 00:11:05.724 "claimed": false, 00:11:05.724 "zoned": false, 00:11:05.724 "supported_io_types": { 00:11:05.724 "read": true, 00:11:05.724 "write": true, 00:11:05.724 "unmap": true, 00:11:05.724 "flush": true, 00:11:05.724 "reset": true, 00:11:05.724 "nvme_admin": false, 00:11:05.724 "nvme_io": false, 00:11:05.724 "nvme_io_md": false, 00:11:05.724 "write_zeroes": true, 00:11:05.724 "zcopy": true, 00:11:05.724 "get_zone_info": false, 00:11:05.724 "zone_management": false, 00:11:05.724 "zone_append": false, 00:11:05.724 "compare": false, 00:11:05.724 "compare_and_write": false, 00:11:05.724 "abort": true, 00:11:05.724 "seek_hole": false, 00:11:05.724 "seek_data": false, 00:11:05.724 "copy": true, 
00:11:05.724 "nvme_iov_md": false 00:11:05.724 }, 00:11:05.724 "memory_domains": [ 00:11:05.724 { 00:11:05.724 "dma_device_id": "system", 00:11:05.724 "dma_device_type": 1 00:11:05.724 }, 00:11:05.724 { 00:11:05.724 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:05.724 "dma_device_type": 2 00:11:05.724 } 00:11:05.724 ], 00:11:05.724 "driver_specific": {} 00:11:05.724 } 00:11:05.724 ] 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.724 [2024-10-09 03:13:48.944135] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:05.724 [2024-10-09 03:13:48.944223] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:05.724 [2024-10-09 03:13:48.944267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:05.724 [2024-10-09 03:13:48.946267] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:05.724 [2024-10-09 03:13:48.946361] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.724 03:13:48 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.724 03:13:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.724 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.724 "name": "Existed_Raid", 00:11:05.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.724 "strip_size_kb": 64, 00:11:05.724 "state": "configuring", 00:11:05.724 
"raid_level": "raid0", 00:11:05.724 "superblock": false, 00:11:05.724 "num_base_bdevs": 4, 00:11:05.724 "num_base_bdevs_discovered": 3, 00:11:05.724 "num_base_bdevs_operational": 4, 00:11:05.724 "base_bdevs_list": [ 00:11:05.724 { 00:11:05.724 "name": "BaseBdev1", 00:11:05.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:05.724 "is_configured": false, 00:11:05.724 "data_offset": 0, 00:11:05.724 "data_size": 0 00:11:05.724 }, 00:11:05.724 { 00:11:05.724 "name": "BaseBdev2", 00:11:05.724 "uuid": "f88442e8-5454-42db-88f4-c471fa58bee4", 00:11:05.724 "is_configured": true, 00:11:05.724 "data_offset": 0, 00:11:05.724 "data_size": 65536 00:11:05.724 }, 00:11:05.724 { 00:11:05.724 "name": "BaseBdev3", 00:11:05.724 "uuid": "537b7c2b-cef6-4c9b-b93b-f56d97ddc114", 00:11:05.724 "is_configured": true, 00:11:05.724 "data_offset": 0, 00:11:05.724 "data_size": 65536 00:11:05.724 }, 00:11:05.724 { 00:11:05.724 "name": "BaseBdev4", 00:11:05.724 "uuid": "fc052b21-e79d-44c5-810b-813bceb0dbb8", 00:11:05.724 "is_configured": true, 00:11:05.724 "data_offset": 0, 00:11:05.724 "data_size": 65536 00:11:05.724 } 00:11:05.724 ] 00:11:05.724 }' 00:11:05.724 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.724 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.293 [2024-10-09 03:13:49.383393] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.293 "name": "Existed_Raid", 00:11:06.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.293 "strip_size_kb": 64, 00:11:06.293 "state": "configuring", 00:11:06.293 "raid_level": "raid0", 00:11:06.293 "superblock": false, 00:11:06.293 
"num_base_bdevs": 4, 00:11:06.293 "num_base_bdevs_discovered": 2, 00:11:06.293 "num_base_bdevs_operational": 4, 00:11:06.293 "base_bdevs_list": [ 00:11:06.293 { 00:11:06.293 "name": "BaseBdev1", 00:11:06.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.293 "is_configured": false, 00:11:06.293 "data_offset": 0, 00:11:06.293 "data_size": 0 00:11:06.293 }, 00:11:06.293 { 00:11:06.293 "name": null, 00:11:06.293 "uuid": "f88442e8-5454-42db-88f4-c471fa58bee4", 00:11:06.293 "is_configured": false, 00:11:06.293 "data_offset": 0, 00:11:06.293 "data_size": 65536 00:11:06.293 }, 00:11:06.293 { 00:11:06.293 "name": "BaseBdev3", 00:11:06.293 "uuid": "537b7c2b-cef6-4c9b-b93b-f56d97ddc114", 00:11:06.293 "is_configured": true, 00:11:06.293 "data_offset": 0, 00:11:06.293 "data_size": 65536 00:11:06.293 }, 00:11:06.293 { 00:11:06.293 "name": "BaseBdev4", 00:11:06.293 "uuid": "fc052b21-e79d-44c5-810b-813bceb0dbb8", 00:11:06.293 "is_configured": true, 00:11:06.293 "data_offset": 0, 00:11:06.293 "data_size": 65536 00:11:06.293 } 00:11:06.293 ] 00:11:06.293 }' 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.293 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.552 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.552 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:06.552 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.552 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.552 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:06.812 03:13:49 
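The dump above shows what `bdev_raid_remove_base_bdev BaseBdev2` does to the reported JSON: the slot is not deleted, its `"name"` becomes `null` and `"is_configured"` flips to `false`, while the `"uuid"` is preserved so the member can be identified later. A minimal Python sketch of that observable behavior (an assumption-level model of the JSON transformation, not SPDK's internal bookkeeping), using the uuids from the log:

```python
# Hedged sketch of the JSON-level effect of bdev_raid_remove_base_bdev,
# as seen in the Existed_Raid dumps: the slot stays in base_bdevs_list
# with name nulled and is_configured cleared; uuid is retained.

def remove_base_bdev(base_bdevs_list, name):
    for slot in base_bdevs_list:
        if slot["name"] == name:
            slot["name"] = None          # slot kept; identity via uuid
            slot["is_configured"] = False
            return True
    return False                         # no such configured base bdev

slots = [
    {"name": "BaseBdev1", "uuid": "00000000-0000-0000-0000-000000000000",
     "is_configured": False},
    {"name": "BaseBdev2", "uuid": "f88442e8-5454-42db-88f4-c471fa58bee4",
     "is_configured": True},
    {"name": "BaseBdev3", "uuid": "537b7c2b-cef6-4c9b-b93b-f56d97ddc114",
     "is_configured": True},
    {"name": "BaseBdev4", "uuid": "fc052b21-e79d-44c5-810b-813bceb0dbb8",
     "is_configured": True},
]
remove_base_bdev(slots, "BaseBdev2")
discovered = sum(1 for s in slots if s["is_configured"])
```

This is exactly what the trace's `jq '.[0].base_bdevs_list[1].is_configured'` probe confirms when it evaluates to `false`, with `num_base_bdevs_discovered` dropping from 3 to 2.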
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.812 [2024-10-09 03:13:49.907614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:06.812 BaseBdev1 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:06.812 [ 00:11:06.812 { 00:11:06.812 "name": "BaseBdev1", 00:11:06.812 "aliases": [ 00:11:06.812 "e15a272e-c01b-4381-83f4-166e0de4d20d" 00:11:06.812 ], 00:11:06.812 "product_name": "Malloc disk", 00:11:06.812 "block_size": 512, 00:11:06.812 "num_blocks": 65536, 00:11:06.812 "uuid": "e15a272e-c01b-4381-83f4-166e0de4d20d", 00:11:06.812 "assigned_rate_limits": { 00:11:06.812 "rw_ios_per_sec": 0, 00:11:06.812 "rw_mbytes_per_sec": 0, 00:11:06.812 "r_mbytes_per_sec": 0, 00:11:06.812 "w_mbytes_per_sec": 0 00:11:06.812 }, 00:11:06.812 "claimed": true, 00:11:06.812 "claim_type": "exclusive_write", 00:11:06.812 "zoned": false, 00:11:06.812 "supported_io_types": { 00:11:06.812 "read": true, 00:11:06.812 "write": true, 00:11:06.812 "unmap": true, 00:11:06.812 "flush": true, 00:11:06.812 "reset": true, 00:11:06.812 "nvme_admin": false, 00:11:06.812 "nvme_io": false, 00:11:06.812 "nvme_io_md": false, 00:11:06.812 "write_zeroes": true, 00:11:06.812 "zcopy": true, 00:11:06.812 "get_zone_info": false, 00:11:06.812 "zone_management": false, 00:11:06.812 "zone_append": false, 00:11:06.812 "compare": false, 00:11:06.812 "compare_and_write": false, 00:11:06.812 "abort": true, 00:11:06.812 "seek_hole": false, 00:11:06.812 "seek_data": false, 00:11:06.812 "copy": true, 00:11:06.812 "nvme_iov_md": false 00:11:06.812 }, 00:11:06.812 "memory_domains": [ 00:11:06.812 { 00:11:06.812 "dma_device_id": "system", 00:11:06.812 "dma_device_type": 1 00:11:06.812 }, 00:11:06.812 { 00:11:06.812 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.812 "dma_device_type": 2 00:11:06.812 } 00:11:06.812 ], 00:11:06.812 "driver_specific": {} 00:11:06.812 } 00:11:06.812 ] 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:06.812 "name": "Existed_Raid", 00:11:06.812 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:06.812 "strip_size_kb": 64, 00:11:06.812 "state": "configuring", 00:11:06.812 "raid_level": "raid0", 00:11:06.812 "superblock": false, 
00:11:06.812 "num_base_bdevs": 4, 00:11:06.812 "num_base_bdevs_discovered": 3, 00:11:06.812 "num_base_bdevs_operational": 4, 00:11:06.812 "base_bdevs_list": [ 00:11:06.812 { 00:11:06.812 "name": "BaseBdev1", 00:11:06.812 "uuid": "e15a272e-c01b-4381-83f4-166e0de4d20d", 00:11:06.812 "is_configured": true, 00:11:06.812 "data_offset": 0, 00:11:06.812 "data_size": 65536 00:11:06.812 }, 00:11:06.812 { 00:11:06.812 "name": null, 00:11:06.812 "uuid": "f88442e8-5454-42db-88f4-c471fa58bee4", 00:11:06.812 "is_configured": false, 00:11:06.812 "data_offset": 0, 00:11:06.812 "data_size": 65536 00:11:06.812 }, 00:11:06.812 { 00:11:06.812 "name": "BaseBdev3", 00:11:06.812 "uuid": "537b7c2b-cef6-4c9b-b93b-f56d97ddc114", 00:11:06.812 "is_configured": true, 00:11:06.812 "data_offset": 0, 00:11:06.812 "data_size": 65536 00:11:06.812 }, 00:11:06.812 { 00:11:06.812 "name": "BaseBdev4", 00:11:06.812 "uuid": "fc052b21-e79d-44c5-810b-813bceb0dbb8", 00:11:06.812 "is_configured": true, 00:11:06.812 "data_offset": 0, 00:11:06.812 "data_size": 65536 00:11:06.812 } 00:11:06.812 ] 00:11:06.812 }' 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:06.812 03:13:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.071 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.071 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:07.071 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.071 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.071 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.071 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:07.071 03:13:50 
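The trace above re-creates BaseBdev1 with `bdev_malloc_create 32 512 -b BaseBdev1` and then calls the `waitforbdev` helper, which polls `bdev_get_bdevs -b <name> -t 2000` until the bdev shows up. Below is a hedged Python sketch of that polling pattern; the RPC call is stubbed with a fake function, since the real helper is a shell function that shells out to SPDK's `rpc.py` against a running target:

```python
import time

# Sketch of the waitforbdev pattern visible in the trace: poll a
# get-bdevs callable until the named bdev appears or a timeout expires.
# `get_bdevs` stands in for `rpc.py bdev_get_bdevs` (stubbed below).

def waitforbdev(get_bdevs, name, timeout_s=2.0, poll_s=0.01):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if any(b.get("name") == name for b in get_bdevs()):
            return True
        time.sleep(poll_s)
    return False

# Hypothetical stub: the bdev becomes visible on the third poll,
# mimicking a bdev that finishes registration asynchronously.
calls = {"n": 0}
def fake_get_bdevs():
    calls["n"] += 1
    return [{"name": "BaseBdev1"}] if calls["n"] >= 3 else []
```

The 2000 ms default here mirrors the `-t 2000` / `bdev_timeout=2000` values in the trace; everything else (function names, stub behavior) is illustrative only.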
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:07.071 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.071 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.331 [2024-10-09 03:13:50.378951] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.331 03:13:50 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.331 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.331 "name": "Existed_Raid", 00:11:07.331 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.331 "strip_size_kb": 64, 00:11:07.331 "state": "configuring", 00:11:07.331 "raid_level": "raid0", 00:11:07.331 "superblock": false, 00:11:07.332 "num_base_bdevs": 4, 00:11:07.332 "num_base_bdevs_discovered": 2, 00:11:07.332 "num_base_bdevs_operational": 4, 00:11:07.332 "base_bdevs_list": [ 00:11:07.332 { 00:11:07.332 "name": "BaseBdev1", 00:11:07.332 "uuid": "e15a272e-c01b-4381-83f4-166e0de4d20d", 00:11:07.332 "is_configured": true, 00:11:07.332 "data_offset": 0, 00:11:07.332 "data_size": 65536 00:11:07.332 }, 00:11:07.332 { 00:11:07.332 "name": null, 00:11:07.332 "uuid": "f88442e8-5454-42db-88f4-c471fa58bee4", 00:11:07.332 "is_configured": false, 00:11:07.332 "data_offset": 0, 00:11:07.332 "data_size": 65536 00:11:07.332 }, 00:11:07.332 { 00:11:07.332 "name": null, 00:11:07.332 "uuid": "537b7c2b-cef6-4c9b-b93b-f56d97ddc114", 00:11:07.332 "is_configured": false, 00:11:07.332 "data_offset": 0, 00:11:07.332 "data_size": 65536 00:11:07.332 }, 00:11:07.332 { 00:11:07.332 "name": "BaseBdev4", 00:11:07.332 "uuid": "fc052b21-e79d-44c5-810b-813bceb0dbb8", 00:11:07.332 "is_configured": true, 00:11:07.332 "data_offset": 0, 00:11:07.332 "data_size": 65536 00:11:07.332 } 00:11:07.332 ] 00:11:07.332 }' 00:11:07.332 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.332 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.591 [2024-10-09 03:13:50.854188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:07.591 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:07.592 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:07.592 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:07.592 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:07.592 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.592 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:07.851 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:07.851 "name": "Existed_Raid", 00:11:07.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:07.851 "strip_size_kb": 64, 00:11:07.851 "state": "configuring", 00:11:07.851 "raid_level": "raid0", 00:11:07.851 "superblock": false, 00:11:07.851 "num_base_bdevs": 4, 00:11:07.851 "num_base_bdevs_discovered": 3, 00:11:07.851 "num_base_bdevs_operational": 4, 00:11:07.851 "base_bdevs_list": [ 00:11:07.851 { 00:11:07.851 "name": "BaseBdev1", 00:11:07.851 "uuid": "e15a272e-c01b-4381-83f4-166e0de4d20d", 00:11:07.851 "is_configured": true, 00:11:07.851 "data_offset": 0, 00:11:07.851 "data_size": 65536 00:11:07.851 }, 00:11:07.851 { 00:11:07.851 "name": null, 00:11:07.851 "uuid": "f88442e8-5454-42db-88f4-c471fa58bee4", 00:11:07.851 "is_configured": false, 00:11:07.851 "data_offset": 0, 00:11:07.851 "data_size": 65536 00:11:07.851 }, 00:11:07.851 { 00:11:07.851 "name": "BaseBdev3", 00:11:07.851 "uuid": "537b7c2b-cef6-4c9b-b93b-f56d97ddc114", 
00:11:07.851 "is_configured": true, 00:11:07.851 "data_offset": 0, 00:11:07.851 "data_size": 65536 00:11:07.851 }, 00:11:07.851 { 00:11:07.851 "name": "BaseBdev4", 00:11:07.851 "uuid": "fc052b21-e79d-44c5-810b-813bceb0dbb8", 00:11:07.851 "is_configured": true, 00:11:07.851 "data_offset": 0, 00:11:07.851 "data_size": 65536 00:11:07.851 } 00:11:07.851 ] 00:11:07.851 }' 00:11:07.851 03:13:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:07.851 03:13:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.111 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:08.111 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.111 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.111 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.111 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.111 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:08.111 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:08.111 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.111 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.111 [2024-10-09 03:13:51.345379] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.369 03:13:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.369 "name": "Existed_Raid", 00:11:08.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.369 "strip_size_kb": 64, 00:11:08.369 "state": "configuring", 00:11:08.369 "raid_level": "raid0", 00:11:08.369 "superblock": false, 00:11:08.369 "num_base_bdevs": 4, 00:11:08.369 "num_base_bdevs_discovered": 2, 00:11:08.369 
"num_base_bdevs_operational": 4, 00:11:08.369 "base_bdevs_list": [ 00:11:08.369 { 00:11:08.369 "name": null, 00:11:08.369 "uuid": "e15a272e-c01b-4381-83f4-166e0de4d20d", 00:11:08.369 "is_configured": false, 00:11:08.369 "data_offset": 0, 00:11:08.369 "data_size": 65536 00:11:08.369 }, 00:11:08.369 { 00:11:08.369 "name": null, 00:11:08.369 "uuid": "f88442e8-5454-42db-88f4-c471fa58bee4", 00:11:08.369 "is_configured": false, 00:11:08.369 "data_offset": 0, 00:11:08.369 "data_size": 65536 00:11:08.369 }, 00:11:08.369 { 00:11:08.369 "name": "BaseBdev3", 00:11:08.369 "uuid": "537b7c2b-cef6-4c9b-b93b-f56d97ddc114", 00:11:08.369 "is_configured": true, 00:11:08.369 "data_offset": 0, 00:11:08.369 "data_size": 65536 00:11:08.369 }, 00:11:08.369 { 00:11:08.369 "name": "BaseBdev4", 00:11:08.369 "uuid": "fc052b21-e79d-44c5-810b-813bceb0dbb8", 00:11:08.369 "is_configured": true, 00:11:08.369 "data_offset": 0, 00:11:08.369 "data_size": 65536 00:11:08.369 } 00:11:08.369 ] 00:11:08.369 }' 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.369 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.628 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:08.628 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.628 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.628 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.887 [2024-10-09 03:13:51.954712] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.887 03:13:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:08.887 03:13:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.887 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:08.887 "name": "Existed_Raid", 00:11:08.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:08.887 "strip_size_kb": 64, 00:11:08.887 "state": "configuring", 00:11:08.887 "raid_level": "raid0", 00:11:08.887 "superblock": false, 00:11:08.887 "num_base_bdevs": 4, 00:11:08.887 "num_base_bdevs_discovered": 3, 00:11:08.887 "num_base_bdevs_operational": 4, 00:11:08.887 "base_bdevs_list": [ 00:11:08.887 { 00:11:08.887 "name": null, 00:11:08.887 "uuid": "e15a272e-c01b-4381-83f4-166e0de4d20d", 00:11:08.887 "is_configured": false, 00:11:08.887 "data_offset": 0, 00:11:08.887 "data_size": 65536 00:11:08.887 }, 00:11:08.887 { 00:11:08.887 "name": "BaseBdev2", 00:11:08.887 "uuid": "f88442e8-5454-42db-88f4-c471fa58bee4", 00:11:08.887 "is_configured": true, 00:11:08.887 "data_offset": 0, 00:11:08.887 "data_size": 65536 00:11:08.887 }, 00:11:08.887 { 00:11:08.887 "name": "BaseBdev3", 00:11:08.887 "uuid": "537b7c2b-cef6-4c9b-b93b-f56d97ddc114", 00:11:08.887 "is_configured": true, 00:11:08.887 "data_offset": 0, 00:11:08.887 "data_size": 65536 00:11:08.887 }, 00:11:08.887 { 00:11:08.887 "name": "BaseBdev4", 00:11:08.887 "uuid": "fc052b21-e79d-44c5-810b-813bceb0dbb8", 00:11:08.887 "is_configured": true, 00:11:08.887 "data_offset": 0, 00:11:08.887 "data_size": 65536 00:11:08.887 } 00:11:08.887 ] 00:11:08.887 }' 00:11:08.887 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:08.887 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.144 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:09.144 
03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.144 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.144 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e15a272e-c01b-4381-83f4-166e0de4d20d 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.402 [2024-10-09 03:13:52.581775] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:09.402 [2024-10-09 03:13:52.581832] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:09.402 [2024-10-09 03:13:52.581858] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:09.402 [2024-10-09 03:13:52.582138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 
00:11:09.402 [2024-10-09 03:13:52.582317] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:09.402 [2024-10-09 03:13:52.582339] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:09.402 [2024-10-09 03:13:52.582610] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:09.402 NewBaseBdev 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:09.402 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:11:09.403 [ 00:11:09.403 { 00:11:09.403 "name": "NewBaseBdev", 00:11:09.403 "aliases": [ 00:11:09.403 "e15a272e-c01b-4381-83f4-166e0de4d20d" 00:11:09.403 ], 00:11:09.403 "product_name": "Malloc disk", 00:11:09.403 "block_size": 512, 00:11:09.403 "num_blocks": 65536, 00:11:09.403 "uuid": "e15a272e-c01b-4381-83f4-166e0de4d20d", 00:11:09.403 "assigned_rate_limits": { 00:11:09.403 "rw_ios_per_sec": 0, 00:11:09.403 "rw_mbytes_per_sec": 0, 00:11:09.403 "r_mbytes_per_sec": 0, 00:11:09.403 "w_mbytes_per_sec": 0 00:11:09.403 }, 00:11:09.403 "claimed": true, 00:11:09.403 "claim_type": "exclusive_write", 00:11:09.403 "zoned": false, 00:11:09.403 "supported_io_types": { 00:11:09.403 "read": true, 00:11:09.403 "write": true, 00:11:09.403 "unmap": true, 00:11:09.403 "flush": true, 00:11:09.403 "reset": true, 00:11:09.403 "nvme_admin": false, 00:11:09.403 "nvme_io": false, 00:11:09.403 "nvme_io_md": false, 00:11:09.403 "write_zeroes": true, 00:11:09.403 "zcopy": true, 00:11:09.403 "get_zone_info": false, 00:11:09.403 "zone_management": false, 00:11:09.403 "zone_append": false, 00:11:09.403 "compare": false, 00:11:09.403 "compare_and_write": false, 00:11:09.403 "abort": true, 00:11:09.403 "seek_hole": false, 00:11:09.403 "seek_data": false, 00:11:09.403 "copy": true, 00:11:09.403 "nvme_iov_md": false 00:11:09.403 }, 00:11:09.403 "memory_domains": [ 00:11:09.403 { 00:11:09.403 "dma_device_id": "system", 00:11:09.403 "dma_device_type": 1 00:11:09.403 }, 00:11:09.403 { 00:11:09.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.403 "dma_device_type": 2 00:11:09.403 } 00:11:09.403 ], 00:11:09.403 "driver_specific": {} 00:11:09.403 } 00:11:09.403 ] 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 4 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.403 "name": "Existed_Raid", 00:11:09.403 "uuid": "0e6cd379-d398-4ad1-9241-fa7cc70809ef", 00:11:09.403 "strip_size_kb": 64, 00:11:09.403 "state": "online", 00:11:09.403 "raid_level": "raid0", 00:11:09.403 "superblock": false, 00:11:09.403 "num_base_bdevs": 4, 00:11:09.403 
"num_base_bdevs_discovered": 4, 00:11:09.403 "num_base_bdevs_operational": 4, 00:11:09.403 "base_bdevs_list": [ 00:11:09.403 { 00:11:09.403 "name": "NewBaseBdev", 00:11:09.403 "uuid": "e15a272e-c01b-4381-83f4-166e0de4d20d", 00:11:09.403 "is_configured": true, 00:11:09.403 "data_offset": 0, 00:11:09.403 "data_size": 65536 00:11:09.403 }, 00:11:09.403 { 00:11:09.403 "name": "BaseBdev2", 00:11:09.403 "uuid": "f88442e8-5454-42db-88f4-c471fa58bee4", 00:11:09.403 "is_configured": true, 00:11:09.403 "data_offset": 0, 00:11:09.403 "data_size": 65536 00:11:09.403 }, 00:11:09.403 { 00:11:09.403 "name": "BaseBdev3", 00:11:09.403 "uuid": "537b7c2b-cef6-4c9b-b93b-f56d97ddc114", 00:11:09.403 "is_configured": true, 00:11:09.403 "data_offset": 0, 00:11:09.403 "data_size": 65536 00:11:09.403 }, 00:11:09.403 { 00:11:09.403 "name": "BaseBdev4", 00:11:09.403 "uuid": "fc052b21-e79d-44c5-810b-813bceb0dbb8", 00:11:09.403 "is_configured": true, 00:11:09.403 "data_offset": 0, 00:11:09.403 "data_size": 65536 00:11:09.403 } 00:11:09.403 ] 00:11:09.403 }' 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.403 03:13:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.978 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:09.978 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:09.978 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:09.978 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:09.978 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:09.978 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:09.978 03:13:53 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:09.978 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:09.978 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.978 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.978 [2024-10-09 03:13:53.125408] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:09.978 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:09.978 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:09.978 "name": "Existed_Raid", 00:11:09.978 "aliases": [ 00:11:09.978 "0e6cd379-d398-4ad1-9241-fa7cc70809ef" 00:11:09.978 ], 00:11:09.978 "product_name": "Raid Volume", 00:11:09.978 "block_size": 512, 00:11:09.978 "num_blocks": 262144, 00:11:09.978 "uuid": "0e6cd379-d398-4ad1-9241-fa7cc70809ef", 00:11:09.978 "assigned_rate_limits": { 00:11:09.978 "rw_ios_per_sec": 0, 00:11:09.978 "rw_mbytes_per_sec": 0, 00:11:09.978 "r_mbytes_per_sec": 0, 00:11:09.978 "w_mbytes_per_sec": 0 00:11:09.978 }, 00:11:09.978 "claimed": false, 00:11:09.978 "zoned": false, 00:11:09.978 "supported_io_types": { 00:11:09.978 "read": true, 00:11:09.978 "write": true, 00:11:09.978 "unmap": true, 00:11:09.978 "flush": true, 00:11:09.978 "reset": true, 00:11:09.978 "nvme_admin": false, 00:11:09.978 "nvme_io": false, 00:11:09.978 "nvme_io_md": false, 00:11:09.978 "write_zeroes": true, 00:11:09.978 "zcopy": false, 00:11:09.978 "get_zone_info": false, 00:11:09.978 "zone_management": false, 00:11:09.978 "zone_append": false, 00:11:09.978 "compare": false, 00:11:09.978 "compare_and_write": false, 00:11:09.978 "abort": false, 00:11:09.978 "seek_hole": false, 00:11:09.978 "seek_data": false, 00:11:09.978 "copy": false, 00:11:09.978 "nvme_iov_md": false 00:11:09.978 }, 00:11:09.978 "memory_domains": [ 
00:11:09.978 { 00:11:09.978 "dma_device_id": "system", 00:11:09.978 "dma_device_type": 1 00:11:09.978 }, 00:11:09.978 { 00:11:09.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.978 "dma_device_type": 2 00:11:09.978 }, 00:11:09.978 { 00:11:09.978 "dma_device_id": "system", 00:11:09.978 "dma_device_type": 1 00:11:09.978 }, 00:11:09.978 { 00:11:09.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.978 "dma_device_type": 2 00:11:09.978 }, 00:11:09.978 { 00:11:09.978 "dma_device_id": "system", 00:11:09.978 "dma_device_type": 1 00:11:09.978 }, 00:11:09.978 { 00:11:09.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.978 "dma_device_type": 2 00:11:09.978 }, 00:11:09.978 { 00:11:09.978 "dma_device_id": "system", 00:11:09.978 "dma_device_type": 1 00:11:09.978 }, 00:11:09.978 { 00:11:09.978 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:09.978 "dma_device_type": 2 00:11:09.978 } 00:11:09.978 ], 00:11:09.978 "driver_specific": { 00:11:09.978 "raid": { 00:11:09.978 "uuid": "0e6cd379-d398-4ad1-9241-fa7cc70809ef", 00:11:09.978 "strip_size_kb": 64, 00:11:09.978 "state": "online", 00:11:09.978 "raid_level": "raid0", 00:11:09.978 "superblock": false, 00:11:09.979 "num_base_bdevs": 4, 00:11:09.979 "num_base_bdevs_discovered": 4, 00:11:09.979 "num_base_bdevs_operational": 4, 00:11:09.979 "base_bdevs_list": [ 00:11:09.979 { 00:11:09.979 "name": "NewBaseBdev", 00:11:09.979 "uuid": "e15a272e-c01b-4381-83f4-166e0de4d20d", 00:11:09.979 "is_configured": true, 00:11:09.979 "data_offset": 0, 00:11:09.979 "data_size": 65536 00:11:09.979 }, 00:11:09.979 { 00:11:09.979 "name": "BaseBdev2", 00:11:09.979 "uuid": "f88442e8-5454-42db-88f4-c471fa58bee4", 00:11:09.979 "is_configured": true, 00:11:09.979 "data_offset": 0, 00:11:09.979 "data_size": 65536 00:11:09.979 }, 00:11:09.979 { 00:11:09.979 "name": "BaseBdev3", 00:11:09.979 "uuid": "537b7c2b-cef6-4c9b-b93b-f56d97ddc114", 00:11:09.979 "is_configured": true, 00:11:09.979 "data_offset": 0, 00:11:09.979 "data_size": 65536 
00:11:09.979 }, 00:11:09.979 { 00:11:09.979 "name": "BaseBdev4", 00:11:09.979 "uuid": "fc052b21-e79d-44c5-810b-813bceb0dbb8", 00:11:09.979 "is_configured": true, 00:11:09.979 "data_offset": 0, 00:11:09.979 "data_size": 65536 00:11:09.979 } 00:11:09.979 ] 00:11:09.979 } 00:11:09.979 } 00:11:09.979 }' 00:11:09.979 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:09.979 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:09.979 BaseBdev2 00:11:09.979 BaseBdev3 00:11:09.979 BaseBdev4' 00:11:09.979 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.979 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:09.979 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:09.979 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:09.979 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:09.979 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:09.979 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.979 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.237 
03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.237 [2024-10-09 03:13:53.448569] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:10.237 [2024-10-09 03:13:53.448606] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:10.237 [2024-10-09 03:13:53.448693] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:10.237 [2024-10-09 03:13:53.448764] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:10.237 [2024-10-09 03:13:53.448775] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:10.237 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:10.238 03:13:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69513 00:11:10.238 03:13:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@950 -- # '[' -z 69513 ']' 00:11:10.238 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 69513 00:11:10.238 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:10.238 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:10.238 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 69513 00:11:10.238 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:10.238 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:10.238 killing process with pid 69513 00:11:10.238 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 69513' 00:11:10.238 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 69513 00:11:10.238 [2024-10-09 03:13:53.497399] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:10.238 03:13:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 69513 00:11:10.802 [2024-10-09 03:13:53.912616] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:12.181 00:11:12.181 real 0m11.872s 00:11:12.181 user 0m18.722s 00:11:12.181 sys 0m2.009s 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.181 ************************************ 00:11:12.181 END TEST raid_state_function_test 00:11:12.181 ************************************ 00:11:12.181 03:13:55 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:11:12.181 03:13:55 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:12.181 03:13:55 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:12.181 03:13:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.181 ************************************ 00:11:12.181 START TEST raid_state_function_test_sb 00:11:12.181 ************************************ 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid0 4 true 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:12.181 
03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70186 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:12.181 Process raid pid: 70186 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70186' 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70186 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 70186 ']' 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:12.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:12.181 03:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:12.181 [2024-10-09 03:13:55.414819] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:11:12.181 [2024-10-09 03:13:55.414948] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.441 [2024-10-09 03:13:55.581184] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.701 [2024-10-09 03:13:55.799712] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:12.960 [2024-10-09 03:13:56.017390] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:12.960 [2024-10-09 03:13:56.017437] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.219 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:13.219 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:13.219 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.219 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.219 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.219 [2024-10-09 03:13:56.280359] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.219 [2024-10-09 03:13:56.280413] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.219 [2024-10-09 03:13:56.280453] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.219 [2024-10-09 03:13:56.280465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.219 [2024-10-09 03:13:56.280473] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:11:13.219 [2024-10-09 03:13:56.280482] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.219 [2024-10-09 03:13:56.280490] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:13.219 [2024-10-09 03:13:56.280499] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.219 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.219 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:13.219 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.219 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.219 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.219 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.219 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.219 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.219 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.220 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.220 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.220 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.220 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.220 03:13:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.220 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.220 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.220 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.220 "name": "Existed_Raid", 00:11:13.220 "uuid": "f2ec09ae-f1d9-43b0-97f5-6043050f3992", 00:11:13.220 "strip_size_kb": 64, 00:11:13.220 "state": "configuring", 00:11:13.220 "raid_level": "raid0", 00:11:13.220 "superblock": true, 00:11:13.220 "num_base_bdevs": 4, 00:11:13.220 "num_base_bdevs_discovered": 0, 00:11:13.220 "num_base_bdevs_operational": 4, 00:11:13.220 "base_bdevs_list": [ 00:11:13.220 { 00:11:13.220 "name": "BaseBdev1", 00:11:13.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.220 "is_configured": false, 00:11:13.220 "data_offset": 0, 00:11:13.220 "data_size": 0 00:11:13.220 }, 00:11:13.220 { 00:11:13.220 "name": "BaseBdev2", 00:11:13.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.220 "is_configured": false, 00:11:13.220 "data_offset": 0, 00:11:13.220 "data_size": 0 00:11:13.220 }, 00:11:13.220 { 00:11:13.220 "name": "BaseBdev3", 00:11:13.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.220 "is_configured": false, 00:11:13.220 "data_offset": 0, 00:11:13.220 "data_size": 0 00:11:13.220 }, 00:11:13.220 { 00:11:13.220 "name": "BaseBdev4", 00:11:13.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.220 "is_configured": false, 00:11:13.220 "data_offset": 0, 00:11:13.220 "data_size": 0 00:11:13.220 } 00:11:13.220 ] 00:11:13.220 }' 00:11:13.220 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.220 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.790 03:13:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.790 [2024-10-09 03:13:56.795361] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:13.790 [2024-10-09 03:13:56.795407] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.790 [2024-10-09 03:13:56.807362] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.790 [2024-10-09 03:13:56.807404] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.790 [2024-10-09 03:13:56.807413] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.790 [2024-10-09 03:13:56.807422] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.790 [2024-10-09 03:13:56.807429] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.790 [2024-10-09 03:13:56.807437] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.790 [2024-10-09 03:13:56.807444] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:13.790 [2024-10-09 03:13:56.807452] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.790 [2024-10-09 03:13:56.869549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:13.790 BaseBdev1 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.790 [ 00:11:13.790 { 00:11:13.790 "name": "BaseBdev1", 00:11:13.790 "aliases": [ 00:11:13.790 "534e2a2b-cb9c-4f26-8173-a466257d5fc2" 00:11:13.790 ], 00:11:13.790 "product_name": "Malloc disk", 00:11:13.790 "block_size": 512, 00:11:13.790 "num_blocks": 65536, 00:11:13.790 "uuid": "534e2a2b-cb9c-4f26-8173-a466257d5fc2", 00:11:13.790 "assigned_rate_limits": { 00:11:13.790 "rw_ios_per_sec": 0, 00:11:13.790 "rw_mbytes_per_sec": 0, 00:11:13.790 "r_mbytes_per_sec": 0, 00:11:13.790 "w_mbytes_per_sec": 0 00:11:13.790 }, 00:11:13.790 "claimed": true, 00:11:13.790 "claim_type": "exclusive_write", 00:11:13.790 "zoned": false, 00:11:13.790 "supported_io_types": { 00:11:13.790 "read": true, 00:11:13.790 "write": true, 00:11:13.790 "unmap": true, 00:11:13.790 "flush": true, 00:11:13.790 "reset": true, 00:11:13.790 "nvme_admin": false, 00:11:13.790 "nvme_io": false, 00:11:13.790 "nvme_io_md": false, 00:11:13.790 "write_zeroes": true, 00:11:13.790 "zcopy": true, 00:11:13.790 "get_zone_info": false, 00:11:13.790 "zone_management": false, 00:11:13.790 "zone_append": false, 00:11:13.790 "compare": false, 00:11:13.790 "compare_and_write": false, 00:11:13.790 "abort": true, 00:11:13.790 "seek_hole": false, 00:11:13.790 "seek_data": false, 00:11:13.790 "copy": true, 00:11:13.790 "nvme_iov_md": false 00:11:13.790 }, 00:11:13.790 "memory_domains": [ 00:11:13.790 { 00:11:13.790 "dma_device_id": "system", 00:11:13.790 "dma_device_type": 1 00:11:13.790 }, 00:11:13.790 { 00:11:13.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.790 "dma_device_type": 2 00:11:13.790 } 
00:11:13.790 ], 00:11:13.790 "driver_specific": {} 00:11:13.790 } 00:11:13.790 ] 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:13.790 03:13:56 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:13.790 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.790 "name": "Existed_Raid", 00:11:13.790 "uuid": "ce9cb35f-c94b-4c45-93bc-eed25c501190", 00:11:13.790 "strip_size_kb": 64, 00:11:13.790 "state": "configuring", 00:11:13.790 "raid_level": "raid0", 00:11:13.790 "superblock": true, 00:11:13.790 "num_base_bdevs": 4, 00:11:13.790 "num_base_bdevs_discovered": 1, 00:11:13.790 "num_base_bdevs_operational": 4, 00:11:13.790 "base_bdevs_list": [ 00:11:13.790 { 00:11:13.790 "name": "BaseBdev1", 00:11:13.790 "uuid": "534e2a2b-cb9c-4f26-8173-a466257d5fc2", 00:11:13.790 "is_configured": true, 00:11:13.790 "data_offset": 2048, 00:11:13.790 "data_size": 63488 00:11:13.790 }, 00:11:13.790 { 00:11:13.790 "name": "BaseBdev2", 00:11:13.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.790 "is_configured": false, 00:11:13.790 "data_offset": 0, 00:11:13.790 "data_size": 0 00:11:13.790 }, 00:11:13.790 { 00:11:13.791 "name": "BaseBdev3", 00:11:13.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.791 "is_configured": false, 00:11:13.791 "data_offset": 0, 00:11:13.791 "data_size": 0 00:11:13.791 }, 00:11:13.791 { 00:11:13.791 "name": "BaseBdev4", 00:11:13.791 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.791 "is_configured": false, 00:11:13.791 "data_offset": 0, 00:11:13.791 "data_size": 0 00:11:13.791 } 00:11:13.791 ] 00:11:13.791 }' 00:11:13.791 03:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.791 03:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.359 03:13:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.359 [2024-10-09 03:13:57.384825] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.359 [2024-10-09 03:13:57.384966] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.359 [2024-10-09 03:13:57.396861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.359 [2024-10-09 03:13:57.398818] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.359 [2024-10-09 03:13:57.398913] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.359 [2024-10-09 03:13:57.398949] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.359 [2024-10-09 03:13:57.398976] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.359 [2024-10-09 03:13:57.399004] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:14.359 [2024-10-09 03:13:57.399029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:14.359 "name": "Existed_Raid", 00:11:14.359 "uuid": "41f19086-a642-4a23-ae0e-e91ff75664b0", 00:11:14.359 "strip_size_kb": 64, 00:11:14.359 "state": "configuring", 00:11:14.359 "raid_level": "raid0", 00:11:14.359 "superblock": true, 00:11:14.359 "num_base_bdevs": 4, 00:11:14.359 "num_base_bdevs_discovered": 1, 00:11:14.359 "num_base_bdevs_operational": 4, 00:11:14.359 "base_bdevs_list": [ 00:11:14.359 { 00:11:14.359 "name": "BaseBdev1", 00:11:14.359 "uuid": "534e2a2b-cb9c-4f26-8173-a466257d5fc2", 00:11:14.359 "is_configured": true, 00:11:14.359 "data_offset": 2048, 00:11:14.359 "data_size": 63488 00:11:14.359 }, 00:11:14.359 { 00:11:14.359 "name": "BaseBdev2", 00:11:14.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.359 "is_configured": false, 00:11:14.359 "data_offset": 0, 00:11:14.359 "data_size": 0 00:11:14.359 }, 00:11:14.359 { 00:11:14.359 "name": "BaseBdev3", 00:11:14.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.359 "is_configured": false, 00:11:14.359 "data_offset": 0, 00:11:14.359 "data_size": 0 00:11:14.359 }, 00:11:14.359 { 00:11:14.359 "name": "BaseBdev4", 00:11:14.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.359 "is_configured": false, 00:11:14.359 "data_offset": 0, 00:11:14.359 "data_size": 0 00:11:14.359 } 00:11:14.359 ] 00:11:14.359 }' 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.359 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.619 [2024-10-09 03:13:57.900819] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:14.619 BaseBdev2 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.619 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.879 [ 00:11:14.879 { 00:11:14.879 "name": "BaseBdev2", 00:11:14.879 "aliases": [ 00:11:14.879 "1ca63ac7-0fac-4a2f-9f8d-dec565aed532" 00:11:14.879 ], 00:11:14.879 "product_name": "Malloc disk", 00:11:14.879 "block_size": 512, 00:11:14.879 "num_blocks": 65536, 00:11:14.879 "uuid": "1ca63ac7-0fac-4a2f-9f8d-dec565aed532", 
00:11:14.879 "assigned_rate_limits": { 00:11:14.879 "rw_ios_per_sec": 0, 00:11:14.879 "rw_mbytes_per_sec": 0, 00:11:14.879 "r_mbytes_per_sec": 0, 00:11:14.879 "w_mbytes_per_sec": 0 00:11:14.879 }, 00:11:14.879 "claimed": true, 00:11:14.879 "claim_type": "exclusive_write", 00:11:14.879 "zoned": false, 00:11:14.879 "supported_io_types": { 00:11:14.879 "read": true, 00:11:14.879 "write": true, 00:11:14.879 "unmap": true, 00:11:14.879 "flush": true, 00:11:14.879 "reset": true, 00:11:14.879 "nvme_admin": false, 00:11:14.879 "nvme_io": false, 00:11:14.879 "nvme_io_md": false, 00:11:14.879 "write_zeroes": true, 00:11:14.879 "zcopy": true, 00:11:14.879 "get_zone_info": false, 00:11:14.879 "zone_management": false, 00:11:14.879 "zone_append": false, 00:11:14.879 "compare": false, 00:11:14.879 "compare_and_write": false, 00:11:14.879 "abort": true, 00:11:14.879 "seek_hole": false, 00:11:14.879 "seek_data": false, 00:11:14.879 "copy": true, 00:11:14.879 "nvme_iov_md": false 00:11:14.879 }, 00:11:14.879 "memory_domains": [ 00:11:14.879 { 00:11:14.879 "dma_device_id": "system", 00:11:14.879 "dma_device_type": 1 00:11:14.879 }, 00:11:14.879 { 00:11:14.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.879 "dma_device_type": 2 00:11:14.879 } 00:11:14.879 ], 00:11:14.879 "driver_specific": {} 00:11:14.879 } 00:11:14.879 ] 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.879 "name": "Existed_Raid", 00:11:14.879 "uuid": "41f19086-a642-4a23-ae0e-e91ff75664b0", 00:11:14.879 "strip_size_kb": 64, 00:11:14.879 "state": "configuring", 00:11:14.879 "raid_level": "raid0", 00:11:14.879 "superblock": true, 00:11:14.879 "num_base_bdevs": 4, 00:11:14.879 "num_base_bdevs_discovered": 2, 00:11:14.879 
"num_base_bdevs_operational": 4, 00:11:14.879 "base_bdevs_list": [ 00:11:14.879 { 00:11:14.879 "name": "BaseBdev1", 00:11:14.879 "uuid": "534e2a2b-cb9c-4f26-8173-a466257d5fc2", 00:11:14.879 "is_configured": true, 00:11:14.879 "data_offset": 2048, 00:11:14.879 "data_size": 63488 00:11:14.879 }, 00:11:14.879 { 00:11:14.879 "name": "BaseBdev2", 00:11:14.879 "uuid": "1ca63ac7-0fac-4a2f-9f8d-dec565aed532", 00:11:14.879 "is_configured": true, 00:11:14.879 "data_offset": 2048, 00:11:14.879 "data_size": 63488 00:11:14.879 }, 00:11:14.879 { 00:11:14.879 "name": "BaseBdev3", 00:11:14.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.879 "is_configured": false, 00:11:14.879 "data_offset": 0, 00:11:14.879 "data_size": 0 00:11:14.879 }, 00:11:14.879 { 00:11:14.879 "name": "BaseBdev4", 00:11:14.879 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.879 "is_configured": false, 00:11:14.879 "data_offset": 0, 00:11:14.879 "data_size": 0 00:11:14.879 } 00:11:14.879 ] 00:11:14.879 }' 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.879 03:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.139 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:15.139 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.139 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.397 [2024-10-09 03:13:58.445349] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:15.397 BaseBdev3 00:11:15.397 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.397 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:15.397 03:13:58 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:15.397 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:15.397 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.398 [ 00:11:15.398 { 00:11:15.398 "name": "BaseBdev3", 00:11:15.398 "aliases": [ 00:11:15.398 "8a27df5a-c471-4747-a862-5a91ec0ed7f1" 00:11:15.398 ], 00:11:15.398 "product_name": "Malloc disk", 00:11:15.398 "block_size": 512, 00:11:15.398 "num_blocks": 65536, 00:11:15.398 "uuid": "8a27df5a-c471-4747-a862-5a91ec0ed7f1", 00:11:15.398 "assigned_rate_limits": { 00:11:15.398 "rw_ios_per_sec": 0, 00:11:15.398 "rw_mbytes_per_sec": 0, 00:11:15.398 "r_mbytes_per_sec": 0, 00:11:15.398 "w_mbytes_per_sec": 0 00:11:15.398 }, 00:11:15.398 "claimed": true, 00:11:15.398 "claim_type": "exclusive_write", 00:11:15.398 "zoned": false, 00:11:15.398 "supported_io_types": { 
00:11:15.398 "read": true, 00:11:15.398 "write": true, 00:11:15.398 "unmap": true, 00:11:15.398 "flush": true, 00:11:15.398 "reset": true, 00:11:15.398 "nvme_admin": false, 00:11:15.398 "nvme_io": false, 00:11:15.398 "nvme_io_md": false, 00:11:15.398 "write_zeroes": true, 00:11:15.398 "zcopy": true, 00:11:15.398 "get_zone_info": false, 00:11:15.398 "zone_management": false, 00:11:15.398 "zone_append": false, 00:11:15.398 "compare": false, 00:11:15.398 "compare_and_write": false, 00:11:15.398 "abort": true, 00:11:15.398 "seek_hole": false, 00:11:15.398 "seek_data": false, 00:11:15.398 "copy": true, 00:11:15.398 "nvme_iov_md": false 00:11:15.398 }, 00:11:15.398 "memory_domains": [ 00:11:15.398 { 00:11:15.398 "dma_device_id": "system", 00:11:15.398 "dma_device_type": 1 00:11:15.398 }, 00:11:15.398 { 00:11:15.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.398 "dma_device_type": 2 00:11:15.398 } 00:11:15.398 ], 00:11:15.398 "driver_specific": {} 00:11:15.398 } 00:11:15.398 ] 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.398 "name": "Existed_Raid", 00:11:15.398 "uuid": "41f19086-a642-4a23-ae0e-e91ff75664b0", 00:11:15.398 "strip_size_kb": 64, 00:11:15.398 "state": "configuring", 00:11:15.398 "raid_level": "raid0", 00:11:15.398 "superblock": true, 00:11:15.398 "num_base_bdevs": 4, 00:11:15.398 "num_base_bdevs_discovered": 3, 00:11:15.398 "num_base_bdevs_operational": 4, 00:11:15.398 "base_bdevs_list": [ 00:11:15.398 { 00:11:15.398 "name": "BaseBdev1", 00:11:15.398 "uuid": "534e2a2b-cb9c-4f26-8173-a466257d5fc2", 00:11:15.398 "is_configured": true, 00:11:15.398 "data_offset": 2048, 00:11:15.398 "data_size": 63488 00:11:15.398 }, 00:11:15.398 { 00:11:15.398 "name": "BaseBdev2", 00:11:15.398 
"uuid": "1ca63ac7-0fac-4a2f-9f8d-dec565aed532", 00:11:15.398 "is_configured": true, 00:11:15.398 "data_offset": 2048, 00:11:15.398 "data_size": 63488 00:11:15.398 }, 00:11:15.398 { 00:11:15.398 "name": "BaseBdev3", 00:11:15.398 "uuid": "8a27df5a-c471-4747-a862-5a91ec0ed7f1", 00:11:15.398 "is_configured": true, 00:11:15.398 "data_offset": 2048, 00:11:15.398 "data_size": 63488 00:11:15.398 }, 00:11:15.398 { 00:11:15.398 "name": "BaseBdev4", 00:11:15.398 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.398 "is_configured": false, 00:11:15.398 "data_offset": 0, 00:11:15.398 "data_size": 0 00:11:15.398 } 00:11:15.398 ] 00:11:15.398 }' 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.398 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.658 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:15.658 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.658 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.918 [2024-10-09 03:13:58.965155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:15.918 [2024-10-09 03:13:58.965549] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:15.918 [2024-10-09 03:13:58.965572] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:15.918 [2024-10-09 03:13:58.965889] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:15.918 [2024-10-09 03:13:58.966069] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:15.918 [2024-10-09 03:13:58.966093] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 
0x617000007e80 00:11:15.918 BaseBdev4 00:11:15.918 [2024-10-09 03:13:58.966259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:15.918 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.918 03:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:15.918 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:15.918 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:15.918 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:15.918 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:15.918 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:15.918 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:15.918 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.918 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.918 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.918 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:15.918 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.918 03:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.918 [ 00:11:15.918 { 00:11:15.918 "name": "BaseBdev4", 00:11:15.918 "aliases": [ 00:11:15.918 "c5cf6294-92b3-4f73-ac59-888c51c12f4f" 00:11:15.918 ], 00:11:15.918 "product_name": "Malloc disk", 00:11:15.918 "block_size": 512, 
00:11:15.918 "num_blocks": 65536, 00:11:15.918 "uuid": "c5cf6294-92b3-4f73-ac59-888c51c12f4f", 00:11:15.918 "assigned_rate_limits": { 00:11:15.918 "rw_ios_per_sec": 0, 00:11:15.918 "rw_mbytes_per_sec": 0, 00:11:15.918 "r_mbytes_per_sec": 0, 00:11:15.918 "w_mbytes_per_sec": 0 00:11:15.918 }, 00:11:15.918 "claimed": true, 00:11:15.918 "claim_type": "exclusive_write", 00:11:15.918 "zoned": false, 00:11:15.918 "supported_io_types": { 00:11:15.918 "read": true, 00:11:15.918 "write": true, 00:11:15.918 "unmap": true, 00:11:15.918 "flush": true, 00:11:15.918 "reset": true, 00:11:15.918 "nvme_admin": false, 00:11:15.918 "nvme_io": false, 00:11:15.918 "nvme_io_md": false, 00:11:15.918 "write_zeroes": true, 00:11:15.918 "zcopy": true, 00:11:15.918 "get_zone_info": false, 00:11:15.918 "zone_management": false, 00:11:15.918 "zone_append": false, 00:11:15.918 "compare": false, 00:11:15.918 "compare_and_write": false, 00:11:15.918 "abort": true, 00:11:15.918 "seek_hole": false, 00:11:15.918 "seek_data": false, 00:11:15.918 "copy": true, 00:11:15.918 "nvme_iov_md": false 00:11:15.918 }, 00:11:15.918 "memory_domains": [ 00:11:15.918 { 00:11:15.918 "dma_device_id": "system", 00:11:15.918 "dma_device_type": 1 00:11:15.918 }, 00:11:15.918 { 00:11:15.918 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.918 "dma_device_type": 2 00:11:15.918 } 00:11:15.918 ], 00:11:15.918 "driver_specific": {} 00:11:15.918 } 00:11:15.918 ] 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 
64 4 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.918 "name": "Existed_Raid", 00:11:15.918 "uuid": "41f19086-a642-4a23-ae0e-e91ff75664b0", 00:11:15.918 "strip_size_kb": 64, 00:11:15.918 "state": "online", 00:11:15.918 "raid_level": "raid0", 00:11:15.918 "superblock": true, 00:11:15.918 "num_base_bdevs": 4, 
00:11:15.918 "num_base_bdevs_discovered": 4, 00:11:15.918 "num_base_bdevs_operational": 4, 00:11:15.918 "base_bdevs_list": [ 00:11:15.918 { 00:11:15.918 "name": "BaseBdev1", 00:11:15.918 "uuid": "534e2a2b-cb9c-4f26-8173-a466257d5fc2", 00:11:15.918 "is_configured": true, 00:11:15.918 "data_offset": 2048, 00:11:15.918 "data_size": 63488 00:11:15.918 }, 00:11:15.918 { 00:11:15.918 "name": "BaseBdev2", 00:11:15.918 "uuid": "1ca63ac7-0fac-4a2f-9f8d-dec565aed532", 00:11:15.918 "is_configured": true, 00:11:15.918 "data_offset": 2048, 00:11:15.918 "data_size": 63488 00:11:15.918 }, 00:11:15.918 { 00:11:15.918 "name": "BaseBdev3", 00:11:15.918 "uuid": "8a27df5a-c471-4747-a862-5a91ec0ed7f1", 00:11:15.918 "is_configured": true, 00:11:15.918 "data_offset": 2048, 00:11:15.918 "data_size": 63488 00:11:15.918 }, 00:11:15.918 { 00:11:15.918 "name": "BaseBdev4", 00:11:15.918 "uuid": "c5cf6294-92b3-4f73-ac59-888c51c12f4f", 00:11:15.918 "is_configured": true, 00:11:15.918 "data_offset": 2048, 00:11:15.918 "data_size": 63488 00:11:15.918 } 00:11:15.918 ] 00:11:15.918 }' 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.918 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.178 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:16.178 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:16.178 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.178 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.178 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.178 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.178 
03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.178 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:16.178 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.178 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.178 [2024-10-09 03:13:59.473371] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.438 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.438 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.438 "name": "Existed_Raid", 00:11:16.438 "aliases": [ 00:11:16.438 "41f19086-a642-4a23-ae0e-e91ff75664b0" 00:11:16.438 ], 00:11:16.438 "product_name": "Raid Volume", 00:11:16.438 "block_size": 512, 00:11:16.438 "num_blocks": 253952, 00:11:16.438 "uuid": "41f19086-a642-4a23-ae0e-e91ff75664b0", 00:11:16.438 "assigned_rate_limits": { 00:11:16.438 "rw_ios_per_sec": 0, 00:11:16.438 "rw_mbytes_per_sec": 0, 00:11:16.438 "r_mbytes_per_sec": 0, 00:11:16.438 "w_mbytes_per_sec": 0 00:11:16.438 }, 00:11:16.438 "claimed": false, 00:11:16.438 "zoned": false, 00:11:16.438 "supported_io_types": { 00:11:16.438 "read": true, 00:11:16.438 "write": true, 00:11:16.438 "unmap": true, 00:11:16.438 "flush": true, 00:11:16.438 "reset": true, 00:11:16.438 "nvme_admin": false, 00:11:16.438 "nvme_io": false, 00:11:16.438 "nvme_io_md": false, 00:11:16.438 "write_zeroes": true, 00:11:16.438 "zcopy": false, 00:11:16.438 "get_zone_info": false, 00:11:16.438 "zone_management": false, 00:11:16.438 "zone_append": false, 00:11:16.438 "compare": false, 00:11:16.438 "compare_and_write": false, 00:11:16.438 "abort": false, 00:11:16.438 "seek_hole": false, 00:11:16.438 "seek_data": false, 00:11:16.438 "copy": false, 00:11:16.438 
"nvme_iov_md": false 00:11:16.438 }, 00:11:16.438 "memory_domains": [ 00:11:16.438 { 00:11:16.438 "dma_device_id": "system", 00:11:16.438 "dma_device_type": 1 00:11:16.438 }, 00:11:16.438 { 00:11:16.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.438 "dma_device_type": 2 00:11:16.438 }, 00:11:16.438 { 00:11:16.438 "dma_device_id": "system", 00:11:16.438 "dma_device_type": 1 00:11:16.438 }, 00:11:16.438 { 00:11:16.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.438 "dma_device_type": 2 00:11:16.438 }, 00:11:16.438 { 00:11:16.438 "dma_device_id": "system", 00:11:16.438 "dma_device_type": 1 00:11:16.438 }, 00:11:16.438 { 00:11:16.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.438 "dma_device_type": 2 00:11:16.438 }, 00:11:16.438 { 00:11:16.438 "dma_device_id": "system", 00:11:16.438 "dma_device_type": 1 00:11:16.438 }, 00:11:16.438 { 00:11:16.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.438 "dma_device_type": 2 00:11:16.438 } 00:11:16.438 ], 00:11:16.438 "driver_specific": { 00:11:16.438 "raid": { 00:11:16.438 "uuid": "41f19086-a642-4a23-ae0e-e91ff75664b0", 00:11:16.438 "strip_size_kb": 64, 00:11:16.438 "state": "online", 00:11:16.438 "raid_level": "raid0", 00:11:16.438 "superblock": true, 00:11:16.438 "num_base_bdevs": 4, 00:11:16.438 "num_base_bdevs_discovered": 4, 00:11:16.438 "num_base_bdevs_operational": 4, 00:11:16.438 "base_bdevs_list": [ 00:11:16.438 { 00:11:16.438 "name": "BaseBdev1", 00:11:16.438 "uuid": "534e2a2b-cb9c-4f26-8173-a466257d5fc2", 00:11:16.438 "is_configured": true, 00:11:16.438 "data_offset": 2048, 00:11:16.438 "data_size": 63488 00:11:16.438 }, 00:11:16.438 { 00:11:16.438 "name": "BaseBdev2", 00:11:16.438 "uuid": "1ca63ac7-0fac-4a2f-9f8d-dec565aed532", 00:11:16.438 "is_configured": true, 00:11:16.438 "data_offset": 2048, 00:11:16.438 "data_size": 63488 00:11:16.438 }, 00:11:16.438 { 00:11:16.438 "name": "BaseBdev3", 00:11:16.438 "uuid": "8a27df5a-c471-4747-a862-5a91ec0ed7f1", 00:11:16.438 "is_configured": true, 
00:11:16.438 "data_offset": 2048, 00:11:16.438 "data_size": 63488 00:11:16.438 }, 00:11:16.438 { 00:11:16.438 "name": "BaseBdev4", 00:11:16.438 "uuid": "c5cf6294-92b3-4f73-ac59-888c51c12f4f", 00:11:16.438 "is_configured": true, 00:11:16.438 "data_offset": 2048, 00:11:16.438 "data_size": 63488 00:11:16.438 } 00:11:16.438 ] 00:11:16.438 } 00:11:16.438 } 00:11:16.438 }' 00:11:16.438 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:16.438 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:16.438 BaseBdev2 00:11:16.438 BaseBdev3 00:11:16.438 BaseBdev4' 00:11:16.438 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.439 03:13:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.439 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.699 [2024-10-09 03:13:59.821022] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.699 [2024-10-09 03:13:59.821101] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:16.699 [2024-10-09 03:13:59.821166] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.699 "name": "Existed_Raid", 00:11:16.699 "uuid": "41f19086-a642-4a23-ae0e-e91ff75664b0", 00:11:16.699 "strip_size_kb": 64, 00:11:16.699 "state": "offline", 00:11:16.699 "raid_level": "raid0", 00:11:16.699 "superblock": true, 00:11:16.699 "num_base_bdevs": 4, 00:11:16.699 "num_base_bdevs_discovered": 3, 00:11:16.699 "num_base_bdevs_operational": 3, 00:11:16.699 "base_bdevs_list": [ 00:11:16.699 { 00:11:16.699 "name": null, 00:11:16.699 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.699 "is_configured": false, 00:11:16.699 "data_offset": 0, 00:11:16.699 "data_size": 63488 00:11:16.699 }, 00:11:16.699 { 00:11:16.699 "name": "BaseBdev2", 00:11:16.699 "uuid": "1ca63ac7-0fac-4a2f-9f8d-dec565aed532", 00:11:16.699 "is_configured": true, 00:11:16.699 "data_offset": 2048, 00:11:16.699 "data_size": 63488 00:11:16.699 }, 00:11:16.699 { 00:11:16.699 "name": "BaseBdev3", 00:11:16.699 "uuid": "8a27df5a-c471-4747-a862-5a91ec0ed7f1", 00:11:16.699 "is_configured": true, 00:11:16.699 "data_offset": 2048, 00:11:16.699 "data_size": 63488 00:11:16.699 }, 00:11:16.699 { 00:11:16.699 "name": "BaseBdev4", 00:11:16.699 "uuid": "c5cf6294-92b3-4f73-ac59-888c51c12f4f", 00:11:16.699 "is_configured": true, 00:11:16.699 "data_offset": 2048, 00:11:16.699 "data_size": 63488 00:11:16.699 } 00:11:16.699 ] 00:11:16.699 }' 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.699 03:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.269 
03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.269 [2024-10-09 03:14:00.445774] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.269 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.529 [2024-10-09 03:14:00.612034] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:17.529 03:14:00 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.529 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.529 [2024-10-09 03:14:00.773664] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:17.529 [2024-10-09 03:14:00.773787] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:17.789 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.789 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.789 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.790 BaseBdev2 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.790 03:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.790 [ 00:11:17.790 { 00:11:17.790 "name": "BaseBdev2", 00:11:17.790 "aliases": [ 00:11:17.790 
"5ee32154-9c37-4e92-98a9-ebd95e282826" 00:11:17.790 ], 00:11:17.790 "product_name": "Malloc disk", 00:11:17.790 "block_size": 512, 00:11:17.790 "num_blocks": 65536, 00:11:17.790 "uuid": "5ee32154-9c37-4e92-98a9-ebd95e282826", 00:11:17.790 "assigned_rate_limits": { 00:11:17.790 "rw_ios_per_sec": 0, 00:11:17.790 "rw_mbytes_per_sec": 0, 00:11:17.790 "r_mbytes_per_sec": 0, 00:11:17.790 "w_mbytes_per_sec": 0 00:11:17.790 }, 00:11:17.790 "claimed": false, 00:11:17.790 "zoned": false, 00:11:17.790 "supported_io_types": { 00:11:17.790 "read": true, 00:11:17.790 "write": true, 00:11:17.790 "unmap": true, 00:11:17.790 "flush": true, 00:11:17.790 "reset": true, 00:11:17.790 "nvme_admin": false, 00:11:17.790 "nvme_io": false, 00:11:17.790 "nvme_io_md": false, 00:11:17.790 "write_zeroes": true, 00:11:17.790 "zcopy": true, 00:11:17.790 "get_zone_info": false, 00:11:17.790 "zone_management": false, 00:11:17.790 "zone_append": false, 00:11:17.790 "compare": false, 00:11:17.790 "compare_and_write": false, 00:11:17.790 "abort": true, 00:11:17.790 "seek_hole": false, 00:11:17.790 "seek_data": false, 00:11:17.790 "copy": true, 00:11:17.790 "nvme_iov_md": false 00:11:17.790 }, 00:11:17.790 "memory_domains": [ 00:11:17.790 { 00:11:17.790 "dma_device_id": "system", 00:11:17.790 "dma_device_type": 1 00:11:17.790 }, 00:11:17.790 { 00:11:17.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.790 "dma_device_type": 2 00:11:17.790 } 00:11:17.790 ], 00:11:17.790 "driver_specific": {} 00:11:17.790 } 00:11:17.790 ] 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.790 03:14:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.790 BaseBdev3 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:17.790 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:17.790 [ 00:11:17.790 { 
00:11:17.790 "name": "BaseBdev3", 00:11:17.790 "aliases": [ 00:11:17.790 "937eb9c8-be90-4ce5-b6aa-76c4e73d8bfe" 00:11:17.790 ], 00:11:17.790 "product_name": "Malloc disk", 00:11:17.790 "block_size": 512, 00:11:17.790 "num_blocks": 65536, 00:11:17.790 "uuid": "937eb9c8-be90-4ce5-b6aa-76c4e73d8bfe", 00:11:17.790 "assigned_rate_limits": { 00:11:17.790 "rw_ios_per_sec": 0, 00:11:17.790 "rw_mbytes_per_sec": 0, 00:11:17.790 "r_mbytes_per_sec": 0, 00:11:17.790 "w_mbytes_per_sec": 0 00:11:17.790 }, 00:11:17.790 "claimed": false, 00:11:17.790 "zoned": false, 00:11:17.790 "supported_io_types": { 00:11:17.790 "read": true, 00:11:17.790 "write": true, 00:11:17.790 "unmap": true, 00:11:17.790 "flush": true, 00:11:17.790 "reset": true, 00:11:17.790 "nvme_admin": false, 00:11:17.790 "nvme_io": false, 00:11:17.790 "nvme_io_md": false, 00:11:17.790 "write_zeroes": true, 00:11:17.790 "zcopy": true, 00:11:17.790 "get_zone_info": false, 00:11:17.790 "zone_management": false, 00:11:17.790 "zone_append": false, 00:11:17.790 "compare": false, 00:11:17.790 "compare_and_write": false, 00:11:17.790 "abort": true, 00:11:17.790 "seek_hole": false, 00:11:17.790 "seek_data": false, 00:11:18.050 "copy": true, 00:11:18.050 "nvme_iov_md": false 00:11:18.050 }, 00:11:18.050 "memory_domains": [ 00:11:18.050 { 00:11:18.050 "dma_device_id": "system", 00:11:18.050 "dma_device_type": 1 00:11:18.050 }, 00:11:18.050 { 00:11:18.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.050 "dma_device_type": 2 00:11:18.050 } 00:11:18.050 ], 00:11:18.050 "driver_specific": {} 00:11:18.050 } 00:11:18.050 ] 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.050 BaseBdev4 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.050 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:18.050 [ 00:11:18.050 { 00:11:18.051 "name": "BaseBdev4", 00:11:18.051 "aliases": [ 00:11:18.051 "4effa299-6e81-4c9b-9223-762ebfccdaa6" 00:11:18.051 ], 00:11:18.051 "product_name": "Malloc disk", 00:11:18.051 "block_size": 512, 00:11:18.051 "num_blocks": 65536, 00:11:18.051 "uuid": "4effa299-6e81-4c9b-9223-762ebfccdaa6", 00:11:18.051 "assigned_rate_limits": { 00:11:18.051 "rw_ios_per_sec": 0, 00:11:18.051 "rw_mbytes_per_sec": 0, 00:11:18.051 "r_mbytes_per_sec": 0, 00:11:18.051 "w_mbytes_per_sec": 0 00:11:18.051 }, 00:11:18.051 "claimed": false, 00:11:18.051 "zoned": false, 00:11:18.051 "supported_io_types": { 00:11:18.051 "read": true, 00:11:18.051 "write": true, 00:11:18.051 "unmap": true, 00:11:18.051 "flush": true, 00:11:18.051 "reset": true, 00:11:18.051 "nvme_admin": false, 00:11:18.051 "nvme_io": false, 00:11:18.051 "nvme_io_md": false, 00:11:18.051 "write_zeroes": true, 00:11:18.051 "zcopy": true, 00:11:18.051 "get_zone_info": false, 00:11:18.051 "zone_management": false, 00:11:18.051 "zone_append": false, 00:11:18.051 "compare": false, 00:11:18.051 "compare_and_write": false, 00:11:18.051 "abort": true, 00:11:18.051 "seek_hole": false, 00:11:18.051 "seek_data": false, 00:11:18.051 "copy": true, 00:11:18.051 "nvme_iov_md": false 00:11:18.051 }, 00:11:18.051 "memory_domains": [ 00:11:18.051 { 00:11:18.051 "dma_device_id": "system", 00:11:18.051 "dma_device_type": 1 00:11:18.051 }, 00:11:18.051 { 00:11:18.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.051 "dma_device_type": 2 00:11:18.051 } 00:11:18.051 ], 00:11:18.051 "driver_specific": {} 00:11:18.051 } 00:11:18.051 ] 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:18.051 03:14:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.051 [2024-10-09 03:14:01.184780] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:18.051 [2024-10-09 03:14:01.184893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:18.051 [2024-10-09 03:14:01.184946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:18.051 [2024-10-09 03:14:01.186760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:18.051 [2024-10-09 03:14:01.186864] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.051 "name": "Existed_Raid", 00:11:18.051 "uuid": "833fc7fe-d54a-4903-9b3d-7319fa222c8f", 00:11:18.051 "strip_size_kb": 64, 00:11:18.051 "state": "configuring", 00:11:18.051 "raid_level": "raid0", 00:11:18.051 "superblock": true, 00:11:18.051 "num_base_bdevs": 4, 00:11:18.051 "num_base_bdevs_discovered": 3, 00:11:18.051 "num_base_bdevs_operational": 4, 00:11:18.051 "base_bdevs_list": [ 00:11:18.051 { 00:11:18.051 "name": "BaseBdev1", 00:11:18.051 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.051 "is_configured": false, 00:11:18.051 "data_offset": 0, 00:11:18.051 "data_size": 0 00:11:18.051 }, 00:11:18.051 { 00:11:18.051 "name": "BaseBdev2", 00:11:18.051 "uuid": "5ee32154-9c37-4e92-98a9-ebd95e282826", 00:11:18.051 "is_configured": true, 00:11:18.051 "data_offset": 2048, 00:11:18.051 "data_size": 63488 
00:11:18.051 }, 00:11:18.051 { 00:11:18.051 "name": "BaseBdev3", 00:11:18.051 "uuid": "937eb9c8-be90-4ce5-b6aa-76c4e73d8bfe", 00:11:18.051 "is_configured": true, 00:11:18.051 "data_offset": 2048, 00:11:18.051 "data_size": 63488 00:11:18.051 }, 00:11:18.051 { 00:11:18.051 "name": "BaseBdev4", 00:11:18.051 "uuid": "4effa299-6e81-4c9b-9223-762ebfccdaa6", 00:11:18.051 "is_configured": true, 00:11:18.051 "data_offset": 2048, 00:11:18.051 "data_size": 63488 00:11:18.051 } 00:11:18.051 ] 00:11:18.051 }' 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.051 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.644 [2024-10-09 03:14:01.671956] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.644 "name": "Existed_Raid", 00:11:18.644 "uuid": "833fc7fe-d54a-4903-9b3d-7319fa222c8f", 00:11:18.644 "strip_size_kb": 64, 00:11:18.644 "state": "configuring", 00:11:18.644 "raid_level": "raid0", 00:11:18.644 "superblock": true, 00:11:18.644 "num_base_bdevs": 4, 00:11:18.644 "num_base_bdevs_discovered": 2, 00:11:18.644 "num_base_bdevs_operational": 4, 00:11:18.644 "base_bdevs_list": [ 00:11:18.644 { 00:11:18.644 "name": "BaseBdev1", 00:11:18.644 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.644 "is_configured": false, 00:11:18.644 "data_offset": 0, 00:11:18.644 "data_size": 0 00:11:18.644 }, 00:11:18.644 { 00:11:18.644 "name": null, 00:11:18.644 "uuid": "5ee32154-9c37-4e92-98a9-ebd95e282826", 00:11:18.644 "is_configured": false, 00:11:18.644 "data_offset": 0, 00:11:18.644 "data_size": 63488 
00:11:18.644 }, 00:11:18.644 { 00:11:18.644 "name": "BaseBdev3", 00:11:18.644 "uuid": "937eb9c8-be90-4ce5-b6aa-76c4e73d8bfe", 00:11:18.644 "is_configured": true, 00:11:18.644 "data_offset": 2048, 00:11:18.644 "data_size": 63488 00:11:18.644 }, 00:11:18.644 { 00:11:18.644 "name": "BaseBdev4", 00:11:18.644 "uuid": "4effa299-6e81-4c9b-9223-762ebfccdaa6", 00:11:18.644 "is_configured": true, 00:11:18.644 "data_offset": 2048, 00:11:18.644 "data_size": 63488 00:11:18.644 } 00:11:18.644 ] 00:11:18.644 }' 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.644 03:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.904 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.904 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:18.904 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.904 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:18.904 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:18.904 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:18.904 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:18.904 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:18.904 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.164 [2024-10-09 03:14:02.235799] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.164 BaseBdev1 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.164 [ 00:11:19.164 { 00:11:19.164 "name": "BaseBdev1", 00:11:19.164 "aliases": [ 00:11:19.164 "9964ec81-1486-4aae-b073-996f611af2f2" 00:11:19.164 ], 00:11:19.164 "product_name": "Malloc disk", 00:11:19.164 "block_size": 512, 00:11:19.164 "num_blocks": 65536, 00:11:19.164 "uuid": "9964ec81-1486-4aae-b073-996f611af2f2", 00:11:19.164 "assigned_rate_limits": { 00:11:19.164 "rw_ios_per_sec": 0, 00:11:19.164 "rw_mbytes_per_sec": 0, 
00:11:19.164 "r_mbytes_per_sec": 0, 00:11:19.164 "w_mbytes_per_sec": 0 00:11:19.164 }, 00:11:19.164 "claimed": true, 00:11:19.164 "claim_type": "exclusive_write", 00:11:19.164 "zoned": false, 00:11:19.164 "supported_io_types": { 00:11:19.164 "read": true, 00:11:19.164 "write": true, 00:11:19.164 "unmap": true, 00:11:19.164 "flush": true, 00:11:19.164 "reset": true, 00:11:19.164 "nvme_admin": false, 00:11:19.164 "nvme_io": false, 00:11:19.164 "nvme_io_md": false, 00:11:19.164 "write_zeroes": true, 00:11:19.164 "zcopy": true, 00:11:19.164 "get_zone_info": false, 00:11:19.164 "zone_management": false, 00:11:19.164 "zone_append": false, 00:11:19.164 "compare": false, 00:11:19.164 "compare_and_write": false, 00:11:19.164 "abort": true, 00:11:19.164 "seek_hole": false, 00:11:19.164 "seek_data": false, 00:11:19.164 "copy": true, 00:11:19.164 "nvme_iov_md": false 00:11:19.164 }, 00:11:19.164 "memory_domains": [ 00:11:19.164 { 00:11:19.164 "dma_device_id": "system", 00:11:19.164 "dma_device_type": 1 00:11:19.164 }, 00:11:19.164 { 00:11:19.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.164 "dma_device_type": 2 00:11:19.164 } 00:11:19.164 ], 00:11:19.164 "driver_specific": {} 00:11:19.164 } 00:11:19.164 ] 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.164 03:14:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.164 "name": "Existed_Raid", 00:11:19.164 "uuid": "833fc7fe-d54a-4903-9b3d-7319fa222c8f", 00:11:19.164 "strip_size_kb": 64, 00:11:19.164 "state": "configuring", 00:11:19.164 "raid_level": "raid0", 00:11:19.164 "superblock": true, 00:11:19.164 "num_base_bdevs": 4, 00:11:19.164 "num_base_bdevs_discovered": 3, 00:11:19.164 "num_base_bdevs_operational": 4, 00:11:19.164 "base_bdevs_list": [ 00:11:19.164 { 00:11:19.164 "name": "BaseBdev1", 00:11:19.164 "uuid": "9964ec81-1486-4aae-b073-996f611af2f2", 00:11:19.164 "is_configured": true, 00:11:19.164 "data_offset": 2048, 00:11:19.164 "data_size": 63488 00:11:19.164 }, 00:11:19.164 { 
00:11:19.164 "name": null, 00:11:19.164 "uuid": "5ee32154-9c37-4e92-98a9-ebd95e282826", 00:11:19.164 "is_configured": false, 00:11:19.164 "data_offset": 0, 00:11:19.164 "data_size": 63488 00:11:19.164 }, 00:11:19.164 { 00:11:19.164 "name": "BaseBdev3", 00:11:19.164 "uuid": "937eb9c8-be90-4ce5-b6aa-76c4e73d8bfe", 00:11:19.164 "is_configured": true, 00:11:19.164 "data_offset": 2048, 00:11:19.164 "data_size": 63488 00:11:19.164 }, 00:11:19.164 { 00:11:19.164 "name": "BaseBdev4", 00:11:19.164 "uuid": "4effa299-6e81-4c9b-9223-762ebfccdaa6", 00:11:19.164 "is_configured": true, 00:11:19.164 "data_offset": 2048, 00:11:19.164 "data_size": 63488 00:11:19.164 } 00:11:19.164 ] 00:11:19.164 }' 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.164 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.423 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:19.423 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.423 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.423 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.423 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.423 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:19.423 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:19.423 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.423 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.423 [2024-10-09 03:14:02.719053] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.682 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.682 03:14:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.682 "name": "Existed_Raid", 00:11:19.682 "uuid": "833fc7fe-d54a-4903-9b3d-7319fa222c8f", 00:11:19.682 "strip_size_kb": 64, 00:11:19.682 "state": "configuring", 00:11:19.682 "raid_level": "raid0", 00:11:19.682 "superblock": true, 00:11:19.682 "num_base_bdevs": 4, 00:11:19.682 "num_base_bdevs_discovered": 2, 00:11:19.682 "num_base_bdevs_operational": 4, 00:11:19.682 "base_bdevs_list": [ 00:11:19.682 { 00:11:19.682 "name": "BaseBdev1", 00:11:19.682 "uuid": "9964ec81-1486-4aae-b073-996f611af2f2", 00:11:19.682 "is_configured": true, 00:11:19.683 "data_offset": 2048, 00:11:19.683 "data_size": 63488 00:11:19.683 }, 00:11:19.683 { 00:11:19.683 "name": null, 00:11:19.683 "uuid": "5ee32154-9c37-4e92-98a9-ebd95e282826", 00:11:19.683 "is_configured": false, 00:11:19.683 "data_offset": 0, 00:11:19.683 "data_size": 63488 00:11:19.683 }, 00:11:19.683 { 00:11:19.683 "name": null, 00:11:19.683 "uuid": "937eb9c8-be90-4ce5-b6aa-76c4e73d8bfe", 00:11:19.683 "is_configured": false, 00:11:19.683 "data_offset": 0, 00:11:19.683 "data_size": 63488 00:11:19.683 }, 00:11:19.683 { 00:11:19.683 "name": "BaseBdev4", 00:11:19.683 "uuid": "4effa299-6e81-4c9b-9223-762ebfccdaa6", 00:11:19.683 "is_configured": true, 00:11:19.683 "data_offset": 2048, 00:11:19.683 "data_size": 63488 00:11:19.683 } 00:11:19.683 ] 00:11:19.683 }' 00:11:19.683 03:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.683 03:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.942 
03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.942 [2024-10-09 03:14:03.214432] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:19.942 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.202 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.202 "name": "Existed_Raid", 00:11:20.202 "uuid": "833fc7fe-d54a-4903-9b3d-7319fa222c8f", 00:11:20.202 "strip_size_kb": 64, 00:11:20.202 "state": "configuring", 00:11:20.202 "raid_level": "raid0", 00:11:20.202 "superblock": true, 00:11:20.202 "num_base_bdevs": 4, 00:11:20.202 "num_base_bdevs_discovered": 3, 00:11:20.202 "num_base_bdevs_operational": 4, 00:11:20.202 "base_bdevs_list": [ 00:11:20.202 { 00:11:20.202 "name": "BaseBdev1", 00:11:20.202 "uuid": "9964ec81-1486-4aae-b073-996f611af2f2", 00:11:20.202 "is_configured": true, 00:11:20.202 "data_offset": 2048, 00:11:20.202 "data_size": 63488 00:11:20.202 }, 00:11:20.202 { 00:11:20.202 "name": null, 00:11:20.202 "uuid": "5ee32154-9c37-4e92-98a9-ebd95e282826", 00:11:20.202 "is_configured": false, 00:11:20.202 "data_offset": 0, 00:11:20.202 "data_size": 63488 00:11:20.202 }, 00:11:20.202 { 00:11:20.202 "name": "BaseBdev3", 00:11:20.202 "uuid": "937eb9c8-be90-4ce5-b6aa-76c4e73d8bfe", 00:11:20.202 "is_configured": true, 00:11:20.202 "data_offset": 2048, 00:11:20.202 "data_size": 63488 00:11:20.202 }, 00:11:20.202 { 00:11:20.202 "name": "BaseBdev4", 00:11:20.202 "uuid": 
"4effa299-6e81-4c9b-9223-762ebfccdaa6", 00:11:20.202 "is_configured": true, 00:11:20.202 "data_offset": 2048, 00:11:20.202 "data_size": 63488 00:11:20.202 } 00:11:20.202 ] 00:11:20.202 }' 00:11:20.202 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.202 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.462 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.462 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:20.462 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.462 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.462 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.462 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:20.462 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:20.462 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.462 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.462 [2024-10-09 03:14:03.745492] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:20.722 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.722 "name": "Existed_Raid", 00:11:20.722 "uuid": "833fc7fe-d54a-4903-9b3d-7319fa222c8f", 00:11:20.722 "strip_size_kb": 64, 00:11:20.722 "state": "configuring", 00:11:20.722 "raid_level": "raid0", 00:11:20.722 "superblock": true, 00:11:20.722 "num_base_bdevs": 4, 00:11:20.722 "num_base_bdevs_discovered": 2, 00:11:20.722 "num_base_bdevs_operational": 4, 00:11:20.722 "base_bdevs_list": [ 00:11:20.722 { 00:11:20.722 "name": null, 00:11:20.722 
"uuid": "9964ec81-1486-4aae-b073-996f611af2f2", 00:11:20.722 "is_configured": false, 00:11:20.722 "data_offset": 0, 00:11:20.722 "data_size": 63488 00:11:20.722 }, 00:11:20.722 { 00:11:20.722 "name": null, 00:11:20.723 "uuid": "5ee32154-9c37-4e92-98a9-ebd95e282826", 00:11:20.723 "is_configured": false, 00:11:20.723 "data_offset": 0, 00:11:20.723 "data_size": 63488 00:11:20.723 }, 00:11:20.723 { 00:11:20.723 "name": "BaseBdev3", 00:11:20.723 "uuid": "937eb9c8-be90-4ce5-b6aa-76c4e73d8bfe", 00:11:20.723 "is_configured": true, 00:11:20.723 "data_offset": 2048, 00:11:20.723 "data_size": 63488 00:11:20.723 }, 00:11:20.723 { 00:11:20.723 "name": "BaseBdev4", 00:11:20.723 "uuid": "4effa299-6e81-4c9b-9223-762ebfccdaa6", 00:11:20.723 "is_configured": true, 00:11:20.723 "data_offset": 2048, 00:11:20.723 "data_size": 63488 00:11:20.723 } 00:11:20.723 ] 00:11:20.723 }' 00:11:20.723 03:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.723 03:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.982 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.982 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:20.982 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:20.982 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.242 [2024-10-09 03:14:04.329886] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.242 03:14:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.242 "name": "Existed_Raid", 00:11:21.242 "uuid": "833fc7fe-d54a-4903-9b3d-7319fa222c8f", 00:11:21.242 "strip_size_kb": 64, 00:11:21.242 "state": "configuring", 00:11:21.242 "raid_level": "raid0", 00:11:21.242 "superblock": true, 00:11:21.242 "num_base_bdevs": 4, 00:11:21.242 "num_base_bdevs_discovered": 3, 00:11:21.242 "num_base_bdevs_operational": 4, 00:11:21.242 "base_bdevs_list": [ 00:11:21.242 { 00:11:21.242 "name": null, 00:11:21.242 "uuid": "9964ec81-1486-4aae-b073-996f611af2f2", 00:11:21.242 "is_configured": false, 00:11:21.242 "data_offset": 0, 00:11:21.242 "data_size": 63488 00:11:21.242 }, 00:11:21.242 { 00:11:21.242 "name": "BaseBdev2", 00:11:21.242 "uuid": "5ee32154-9c37-4e92-98a9-ebd95e282826", 00:11:21.242 "is_configured": true, 00:11:21.242 "data_offset": 2048, 00:11:21.242 "data_size": 63488 00:11:21.242 }, 00:11:21.242 { 00:11:21.242 "name": "BaseBdev3", 00:11:21.242 "uuid": "937eb9c8-be90-4ce5-b6aa-76c4e73d8bfe", 00:11:21.242 "is_configured": true, 00:11:21.242 "data_offset": 2048, 00:11:21.242 "data_size": 63488 00:11:21.242 }, 00:11:21.242 { 00:11:21.242 "name": "BaseBdev4", 00:11:21.242 "uuid": "4effa299-6e81-4c9b-9223-762ebfccdaa6", 00:11:21.242 "is_configured": true, 00:11:21.242 "data_offset": 2048, 00:11:21.242 "data_size": 63488 00:11:21.242 } 00:11:21.242 ] 00:11:21.242 }' 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.242 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.502 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.502 03:14:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.502 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.502 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:21.502 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 9964ec81-1486-4aae-b073-996f611af2f2 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.762 [2024-10-09 03:14:04.918318] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:21.762 [2024-10-09 03:14:04.918676] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:21.762 [2024-10-09 03:14:04.918743] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:21.762 [2024-10-09 03:14:04.919070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:11:21.762 NewBaseBdev 00:11:21.762 [2024-10-09 03:14:04.919273] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:21.762 [2024-10-09 03:14:04.919288] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:21.762 [2024-10-09 03:14:04.919430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:21.762 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.762 03:14:04 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.762 [ 00:11:21.762 { 00:11:21.762 "name": "NewBaseBdev", 00:11:21.762 "aliases": [ 00:11:21.762 "9964ec81-1486-4aae-b073-996f611af2f2" 00:11:21.762 ], 00:11:21.762 "product_name": "Malloc disk", 00:11:21.762 "block_size": 512, 00:11:21.762 "num_blocks": 65536, 00:11:21.762 "uuid": "9964ec81-1486-4aae-b073-996f611af2f2", 00:11:21.762 "assigned_rate_limits": { 00:11:21.762 "rw_ios_per_sec": 0, 00:11:21.762 "rw_mbytes_per_sec": 0, 00:11:21.762 "r_mbytes_per_sec": 0, 00:11:21.762 "w_mbytes_per_sec": 0 00:11:21.762 }, 00:11:21.762 "claimed": true, 00:11:21.762 "claim_type": "exclusive_write", 00:11:21.762 "zoned": false, 00:11:21.762 "supported_io_types": { 00:11:21.762 "read": true, 00:11:21.762 "write": true, 00:11:21.762 "unmap": true, 00:11:21.762 "flush": true, 00:11:21.762 "reset": true, 00:11:21.762 "nvme_admin": false, 00:11:21.762 "nvme_io": false, 00:11:21.762 "nvme_io_md": false, 00:11:21.762 "write_zeroes": true, 00:11:21.762 "zcopy": true, 00:11:21.762 "get_zone_info": false, 00:11:21.762 "zone_management": false, 00:11:21.762 "zone_append": false, 00:11:21.762 "compare": false, 00:11:21.762 "compare_and_write": false, 00:11:21.762 "abort": true, 00:11:21.762 "seek_hole": false, 00:11:21.762 "seek_data": false, 00:11:21.762 "copy": true, 00:11:21.762 "nvme_iov_md": false 00:11:21.762 }, 00:11:21.762 "memory_domains": [ 00:11:21.762 { 00:11:21.762 "dma_device_id": "system", 00:11:21.762 "dma_device_type": 1 00:11:21.762 }, 00:11:21.762 { 00:11:21.763 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:21.763 "dma_device_type": 2 00:11:21.763 } 00:11:21.763 ], 00:11:21.763 "driver_specific": {} 00:11:21.763 } 00:11:21.763 ] 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:21.763 03:14:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:21.763 03:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.763 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.763 "name": "Existed_Raid", 00:11:21.763 "uuid": "833fc7fe-d54a-4903-9b3d-7319fa222c8f", 00:11:21.763 "strip_size_kb": 64, 00:11:21.763 
"state": "online", 00:11:21.763 "raid_level": "raid0", 00:11:21.763 "superblock": true, 00:11:21.763 "num_base_bdevs": 4, 00:11:21.763 "num_base_bdevs_discovered": 4, 00:11:21.763 "num_base_bdevs_operational": 4, 00:11:21.763 "base_bdevs_list": [ 00:11:21.763 { 00:11:21.763 "name": "NewBaseBdev", 00:11:21.763 "uuid": "9964ec81-1486-4aae-b073-996f611af2f2", 00:11:21.763 "is_configured": true, 00:11:21.763 "data_offset": 2048, 00:11:21.763 "data_size": 63488 00:11:21.763 }, 00:11:21.763 { 00:11:21.763 "name": "BaseBdev2", 00:11:21.763 "uuid": "5ee32154-9c37-4e92-98a9-ebd95e282826", 00:11:21.763 "is_configured": true, 00:11:21.763 "data_offset": 2048, 00:11:21.763 "data_size": 63488 00:11:21.763 }, 00:11:21.763 { 00:11:21.763 "name": "BaseBdev3", 00:11:21.763 "uuid": "937eb9c8-be90-4ce5-b6aa-76c4e73d8bfe", 00:11:21.763 "is_configured": true, 00:11:21.763 "data_offset": 2048, 00:11:21.763 "data_size": 63488 00:11:21.763 }, 00:11:21.763 { 00:11:21.763 "name": "BaseBdev4", 00:11:21.763 "uuid": "4effa299-6e81-4c9b-9223-762ebfccdaa6", 00:11:21.763 "is_configured": true, 00:11:21.763 "data_offset": 2048, 00:11:21.763 "data_size": 63488 00:11:21.763 } 00:11:21.763 ] 00:11:21.763 }' 00:11:21.763 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.763 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.332 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:22.332 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:22.332 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:22.332 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:22.332 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:22.332 
03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:22.332 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:22.332 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:22.332 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.332 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.332 [2024-10-09 03:14:05.421978] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.332 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.332 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.332 "name": "Existed_Raid", 00:11:22.332 "aliases": [ 00:11:22.332 "833fc7fe-d54a-4903-9b3d-7319fa222c8f" 00:11:22.332 ], 00:11:22.332 "product_name": "Raid Volume", 00:11:22.332 "block_size": 512, 00:11:22.332 "num_blocks": 253952, 00:11:22.332 "uuid": "833fc7fe-d54a-4903-9b3d-7319fa222c8f", 00:11:22.332 "assigned_rate_limits": { 00:11:22.332 "rw_ios_per_sec": 0, 00:11:22.332 "rw_mbytes_per_sec": 0, 00:11:22.332 "r_mbytes_per_sec": 0, 00:11:22.332 "w_mbytes_per_sec": 0 00:11:22.332 }, 00:11:22.332 "claimed": false, 00:11:22.332 "zoned": false, 00:11:22.332 "supported_io_types": { 00:11:22.332 "read": true, 00:11:22.332 "write": true, 00:11:22.332 "unmap": true, 00:11:22.332 "flush": true, 00:11:22.332 "reset": true, 00:11:22.332 "nvme_admin": false, 00:11:22.332 "nvme_io": false, 00:11:22.332 "nvme_io_md": false, 00:11:22.332 "write_zeroes": true, 00:11:22.332 "zcopy": false, 00:11:22.332 "get_zone_info": false, 00:11:22.332 "zone_management": false, 00:11:22.332 "zone_append": false, 00:11:22.332 "compare": false, 00:11:22.332 "compare_and_write": false, 00:11:22.332 "abort": 
false, 00:11:22.332 "seek_hole": false, 00:11:22.332 "seek_data": false, 00:11:22.332 "copy": false, 00:11:22.332 "nvme_iov_md": false 00:11:22.332 }, 00:11:22.332 "memory_domains": [ 00:11:22.332 { 00:11:22.332 "dma_device_id": "system", 00:11:22.332 "dma_device_type": 1 00:11:22.332 }, 00:11:22.332 { 00:11:22.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.332 "dma_device_type": 2 00:11:22.332 }, 00:11:22.332 { 00:11:22.332 "dma_device_id": "system", 00:11:22.332 "dma_device_type": 1 00:11:22.332 }, 00:11:22.332 { 00:11:22.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.332 "dma_device_type": 2 00:11:22.332 }, 00:11:22.332 { 00:11:22.332 "dma_device_id": "system", 00:11:22.332 "dma_device_type": 1 00:11:22.332 }, 00:11:22.332 { 00:11:22.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.332 "dma_device_type": 2 00:11:22.332 }, 00:11:22.332 { 00:11:22.332 "dma_device_id": "system", 00:11:22.332 "dma_device_type": 1 00:11:22.332 }, 00:11:22.332 { 00:11:22.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.332 "dma_device_type": 2 00:11:22.332 } 00:11:22.332 ], 00:11:22.332 "driver_specific": { 00:11:22.332 "raid": { 00:11:22.332 "uuid": "833fc7fe-d54a-4903-9b3d-7319fa222c8f", 00:11:22.332 "strip_size_kb": 64, 00:11:22.332 "state": "online", 00:11:22.332 "raid_level": "raid0", 00:11:22.332 "superblock": true, 00:11:22.332 "num_base_bdevs": 4, 00:11:22.332 "num_base_bdevs_discovered": 4, 00:11:22.332 "num_base_bdevs_operational": 4, 00:11:22.333 "base_bdevs_list": [ 00:11:22.333 { 00:11:22.333 "name": "NewBaseBdev", 00:11:22.333 "uuid": "9964ec81-1486-4aae-b073-996f611af2f2", 00:11:22.333 "is_configured": true, 00:11:22.333 "data_offset": 2048, 00:11:22.333 "data_size": 63488 00:11:22.333 }, 00:11:22.333 { 00:11:22.333 "name": "BaseBdev2", 00:11:22.333 "uuid": "5ee32154-9c37-4e92-98a9-ebd95e282826", 00:11:22.333 "is_configured": true, 00:11:22.333 "data_offset": 2048, 00:11:22.333 "data_size": 63488 00:11:22.333 }, 00:11:22.333 { 00:11:22.333 
"name": "BaseBdev3", 00:11:22.333 "uuid": "937eb9c8-be90-4ce5-b6aa-76c4e73d8bfe", 00:11:22.333 "is_configured": true, 00:11:22.333 "data_offset": 2048, 00:11:22.333 "data_size": 63488 00:11:22.333 }, 00:11:22.333 { 00:11:22.333 "name": "BaseBdev4", 00:11:22.333 "uuid": "4effa299-6e81-4c9b-9223-762ebfccdaa6", 00:11:22.333 "is_configured": true, 00:11:22.333 "data_offset": 2048, 00:11:22.333 "data_size": 63488 00:11:22.333 } 00:11:22.333 ] 00:11:22.333 } 00:11:22.333 } 00:11:22.333 }' 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:22.333 BaseBdev2 00:11:22.333 BaseBdev3 00:11:22.333 BaseBdev4' 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.333 03:14:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.333 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:22.592 [2024-10-09 03:14:05.752964] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:22.592 [2024-10-09 03:14:05.753087] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:22.592 [2024-10-09 03:14:05.753186] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:22.592 [2024-10-09 03:14:05.753262] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:22.592 [2024-10-09 03:14:05.753272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70186 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 70186 ']' 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 70186 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70186 00:11:22.592 killing process with pid 70186 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70186' 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 70186 00:11:22.592 [2024-10-09 03:14:05.787284] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:22.592 03:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 70186 00:11:23.160 [2024-10-09 03:14:06.212164] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:24.539 ************************************ 00:11:24.539 END TEST raid_state_function_test_sb 00:11:24.539 ************************************ 00:11:24.539 03:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:24.539 00:11:24.539 real 0m12.255s 00:11:24.539 user 0m19.355s 00:11:24.539 sys 
0m2.172s 00:11:24.539 03:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:24.539 03:14:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.539 03:14:07 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:11:24.539 03:14:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:24.539 03:14:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:24.539 03:14:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:24.539 ************************************ 00:11:24.539 START TEST raid_superblock_test 00:11:24.539 ************************************ 00:11:24.539 03:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid0 4 00:11:24.539 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:11:24.539 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:11:24.539 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:24.539 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:24.539 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:24.539 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:24.539 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:24.539 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:24.539 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:24.539 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:24.539 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # 
local strip_size_create_arg 00:11:24.539 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:24.539 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:24.540 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:11:24.540 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:11:24.540 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:11:24.540 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70869 00:11:24.540 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:24.540 03:14:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70869 00:11:24.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.540 03:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 70869 ']' 00:11:24.540 03:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.540 03:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:24.540 03:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.540 03:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:24.540 03:14:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.540 [2024-10-09 03:14:07.728886] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:11:24.540 [2024-10-09 03:14:07.729117] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70869 ] 00:11:24.799 [2024-10-09 03:14:07.900035] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:25.058 [2024-10-09 03:14:08.158478] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.317 [2024-10-09 03:14:08.398236] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.317 [2024-10-09 03:14:08.398389] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:25.317 
03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.317 malloc1 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.317 [2024-10-09 03:14:08.611768] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:25.317 [2024-10-09 03:14:08.611862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.317 [2024-10-09 03:14:08.611889] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:25.317 [2024-10-09 03:14:08.611903] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.317 [2024-10-09 03:14:08.614245] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.317 [2024-10-09 03:14:08.614282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:25.317 pt1 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:25.317 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:25.318 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:25.318 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:25.318 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:25.318 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.318 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.577 malloc2 00:11:25.577 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.577 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:25.577 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.577 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.577 [2024-10-09 03:14:08.686615] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:25.577 [2024-10-09 03:14:08.686750] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.577 [2024-10-09 03:14:08.686790] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:25.577 [2024-10-09 03:14:08.686818] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.577 [2024-10-09 03:14:08.689194] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.577 [2024-10-09 03:14:08.689268] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:25.577 
pt2 00:11:25.577 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.577 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:25.577 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.577 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:25.577 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:25.577 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:25.577 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.578 malloc3 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.578 [2024-10-09 03:14:08.744036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:25.578 [2024-10-09 03:14:08.744161] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.578 [2024-10-09 03:14:08.744203] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:25.578 [2024-10-09 03:14:08.744232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.578 [2024-10-09 03:14:08.746553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.578 [2024-10-09 03:14:08.746627] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:25.578 pt3 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.578 malloc4 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.578 [2024-10-09 03:14:08.810475] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:11:25.578 [2024-10-09 03:14:08.810592] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:25.578 [2024-10-09 03:14:08.810617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:25.578 [2024-10-09 03:14:08.810627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:25.578 [2024-10-09 03:14:08.812972] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:25.578 [2024-10-09 03:14:08.813007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:25.578 pt4 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.578 [2024-10-09 03:14:08.822521] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:25.578 [2024-10-09 
03:14:08.824614] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:25.578 [2024-10-09 03:14:08.824721] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:25.578 [2024-10-09 03:14:08.824802] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:25.578 [2024-10-09 03:14:08.825064] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:25.578 [2024-10-09 03:14:08.825119] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:25.578 [2024-10-09 03:14:08.825410] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:25.578 [2024-10-09 03:14:08.825612] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:25.578 [2024-10-09 03:14:08.825658] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:25.578 [2024-10-09 03:14:08.825834] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.578 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:25.838 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.838 "name": "raid_bdev1", 00:11:25.838 "uuid": "c452c327-95b8-4f57-a586-43bed6cb7468", 00:11:25.838 "strip_size_kb": 64, 00:11:25.838 "state": "online", 00:11:25.838 "raid_level": "raid0", 00:11:25.838 "superblock": true, 00:11:25.838 "num_base_bdevs": 4, 00:11:25.838 "num_base_bdevs_discovered": 4, 00:11:25.838 "num_base_bdevs_operational": 4, 00:11:25.838 "base_bdevs_list": [ 00:11:25.838 { 00:11:25.838 "name": "pt1", 00:11:25.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:25.838 "is_configured": true, 00:11:25.838 "data_offset": 2048, 00:11:25.838 "data_size": 63488 00:11:25.838 }, 00:11:25.838 { 00:11:25.838 "name": "pt2", 00:11:25.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:25.838 "is_configured": true, 00:11:25.838 "data_offset": 2048, 00:11:25.838 "data_size": 63488 00:11:25.838 }, 00:11:25.838 { 00:11:25.838 "name": "pt3", 00:11:25.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:25.838 "is_configured": true, 00:11:25.838 "data_offset": 2048, 00:11:25.838 
"data_size": 63488 00:11:25.838 }, 00:11:25.838 { 00:11:25.838 "name": "pt4", 00:11:25.838 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:25.838 "is_configured": true, 00:11:25.838 "data_offset": 2048, 00:11:25.838 "data_size": 63488 00:11:25.838 } 00:11:25.838 ] 00:11:25.838 }' 00:11:25.838 03:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.838 03:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.098 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:26.098 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:26.098 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:26.098 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:26.098 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:26.098 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:26.098 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:26.098 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.098 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.098 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:26.098 [2024-10-09 03:14:09.266258] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.098 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.098 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:26.098 "name": "raid_bdev1", 00:11:26.098 "aliases": [ 00:11:26.098 "c452c327-95b8-4f57-a586-43bed6cb7468" 
00:11:26.098 ], 00:11:26.098 "product_name": "Raid Volume", 00:11:26.098 "block_size": 512, 00:11:26.098 "num_blocks": 253952, 00:11:26.098 "uuid": "c452c327-95b8-4f57-a586-43bed6cb7468", 00:11:26.098 "assigned_rate_limits": { 00:11:26.098 "rw_ios_per_sec": 0, 00:11:26.098 "rw_mbytes_per_sec": 0, 00:11:26.098 "r_mbytes_per_sec": 0, 00:11:26.098 "w_mbytes_per_sec": 0 00:11:26.098 }, 00:11:26.098 "claimed": false, 00:11:26.098 "zoned": false, 00:11:26.098 "supported_io_types": { 00:11:26.098 "read": true, 00:11:26.098 "write": true, 00:11:26.098 "unmap": true, 00:11:26.098 "flush": true, 00:11:26.098 "reset": true, 00:11:26.098 "nvme_admin": false, 00:11:26.098 "nvme_io": false, 00:11:26.098 "nvme_io_md": false, 00:11:26.098 "write_zeroes": true, 00:11:26.098 "zcopy": false, 00:11:26.098 "get_zone_info": false, 00:11:26.098 "zone_management": false, 00:11:26.098 "zone_append": false, 00:11:26.098 "compare": false, 00:11:26.098 "compare_and_write": false, 00:11:26.098 "abort": false, 00:11:26.098 "seek_hole": false, 00:11:26.098 "seek_data": false, 00:11:26.098 "copy": false, 00:11:26.098 "nvme_iov_md": false 00:11:26.098 }, 00:11:26.098 "memory_domains": [ 00:11:26.098 { 00:11:26.098 "dma_device_id": "system", 00:11:26.098 "dma_device_type": 1 00:11:26.098 }, 00:11:26.098 { 00:11:26.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.098 "dma_device_type": 2 00:11:26.098 }, 00:11:26.098 { 00:11:26.098 "dma_device_id": "system", 00:11:26.098 "dma_device_type": 1 00:11:26.098 }, 00:11:26.098 { 00:11:26.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.098 "dma_device_type": 2 00:11:26.098 }, 00:11:26.098 { 00:11:26.098 "dma_device_id": "system", 00:11:26.098 "dma_device_type": 1 00:11:26.098 }, 00:11:26.098 { 00:11:26.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.098 "dma_device_type": 2 00:11:26.098 }, 00:11:26.098 { 00:11:26.098 "dma_device_id": "system", 00:11:26.098 "dma_device_type": 1 00:11:26.098 }, 00:11:26.098 { 00:11:26.098 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:26.098 "dma_device_type": 2 00:11:26.098 } 00:11:26.098 ], 00:11:26.098 "driver_specific": { 00:11:26.098 "raid": { 00:11:26.098 "uuid": "c452c327-95b8-4f57-a586-43bed6cb7468", 00:11:26.098 "strip_size_kb": 64, 00:11:26.098 "state": "online", 00:11:26.098 "raid_level": "raid0", 00:11:26.098 "superblock": true, 00:11:26.098 "num_base_bdevs": 4, 00:11:26.098 "num_base_bdevs_discovered": 4, 00:11:26.098 "num_base_bdevs_operational": 4, 00:11:26.098 "base_bdevs_list": [ 00:11:26.098 { 00:11:26.098 "name": "pt1", 00:11:26.098 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.098 "is_configured": true, 00:11:26.098 "data_offset": 2048, 00:11:26.098 "data_size": 63488 00:11:26.098 }, 00:11:26.098 { 00:11:26.098 "name": "pt2", 00:11:26.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.098 "is_configured": true, 00:11:26.098 "data_offset": 2048, 00:11:26.098 "data_size": 63488 00:11:26.098 }, 00:11:26.098 { 00:11:26.098 "name": "pt3", 00:11:26.099 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.099 "is_configured": true, 00:11:26.099 "data_offset": 2048, 00:11:26.099 "data_size": 63488 00:11:26.099 }, 00:11:26.099 { 00:11:26.099 "name": "pt4", 00:11:26.099 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:26.099 "is_configured": true, 00:11:26.099 "data_offset": 2048, 00:11:26.099 "data_size": 63488 00:11:26.099 } 00:11:26.099 ] 00:11:26.099 } 00:11:26.099 } 00:11:26.099 }' 00:11:26.099 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:26.099 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:26.099 pt2 00:11:26.099 pt3 00:11:26.099 pt4' 00:11:26.099 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.099 03:14:09 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:26.099 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.099 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:26.099 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.099 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.099 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.359 03:14:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.359 [2024-10-09 03:14:09.545615] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=c452c327-95b8-4f57-a586-43bed6cb7468 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z c452c327-95b8-4f57-a586-43bed6cb7468 ']' 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.359 [2024-10-09 03:14:09.589268] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.359 [2024-10-09 03:14:09.589370] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.359 [2024-10-09 03:14:09.589483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.359 [2024-10-09 03:14:09.589575] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.359 [2024-10-09 03:14:09.589636] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 
-- # xtrace_disable 00:11:26.359 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.360 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.619 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.620 03:14:09 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.620 [2024-10-09 03:14:09.729036] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:26.620 [2024-10-09 03:14:09.731230] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:26.620 [2024-10-09 03:14:09.731324] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:26.620 [2024-10-09 03:14:09.731376] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:11:26.620 [2024-10-09 03:14:09.731450] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:26.620 [2024-10-09 03:14:09.731538] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:26.620 [2024-10-09 03:14:09.731592] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:26.620 [2024-10-09 03:14:09.731670] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:11:26.620 [2024-10-09 03:14:09.731717] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.620 [2024-10-09 03:14:09.731757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:11:26.620 request: 00:11:26.620 { 00:11:26.620 "name": "raid_bdev1", 00:11:26.620 "raid_level": "raid0", 00:11:26.620 "base_bdevs": [ 00:11:26.620 "malloc1", 00:11:26.620 "malloc2", 00:11:26.620 "malloc3", 00:11:26.620 "malloc4" 00:11:26.620 ], 00:11:26.620 "strip_size_kb": 64, 00:11:26.620 "superblock": false, 00:11:26.620 "method": "bdev_raid_create", 00:11:26.620 "req_id": 1 00:11:26.620 } 00:11:26.620 Got JSON-RPC error response 00:11:26.620 response: 00:11:26.620 { 00:11:26.620 "code": -17, 00:11:26.620 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:26.620 } 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.620 [2024-10-09 03:14:09.792987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:26.620 [2024-10-09 03:14:09.793106] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:26.620 [2024-10-09 03:14:09.793141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:26.620 [2024-10-09 03:14:09.793171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:26.620 [2024-10-09 03:14:09.795647] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:26.620 [2024-10-09 03:14:09.795728] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:26.620 [2024-10-09 03:14:09.795856] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:26.620 [2024-10-09 03:14:09.795959] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:26.620 pt1 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.620 "name": "raid_bdev1", 00:11:26.620 "uuid": "c452c327-95b8-4f57-a586-43bed6cb7468", 00:11:26.620 "strip_size_kb": 64, 00:11:26.620 "state": "configuring", 00:11:26.620 "raid_level": "raid0", 00:11:26.620 "superblock": true, 00:11:26.620 "num_base_bdevs": 4, 00:11:26.620 "num_base_bdevs_discovered": 1, 00:11:26.620 "num_base_bdevs_operational": 4, 00:11:26.620 "base_bdevs_list": [ 00:11:26.620 { 00:11:26.620 "name": "pt1", 00:11:26.620 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:26.620 "is_configured": true, 00:11:26.620 "data_offset": 2048, 00:11:26.620 "data_size": 63488 00:11:26.620 }, 00:11:26.620 { 00:11:26.620 "name": null, 00:11:26.620 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:26.620 "is_configured": false, 00:11:26.620 "data_offset": 2048, 00:11:26.620 "data_size": 63488 00:11:26.620 }, 00:11:26.620 { 00:11:26.620 "name": null, 00:11:26.620 
"uuid": "00000000-0000-0000-0000-000000000003", 00:11:26.620 "is_configured": false, 00:11:26.620 "data_offset": 2048, 00:11:26.620 "data_size": 63488 00:11:26.620 }, 00:11:26.620 { 00:11:26.620 "name": null, 00:11:26.620 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:26.620 "is_configured": false, 00:11:26.620 "data_offset": 2048, 00:11:26.620 "data_size": 63488 00:11:26.620 } 00:11:26.620 ] 00:11:26.620 }' 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.620 03:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.189 [2024-10-09 03:14:10.233020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:27.189 [2024-10-09 03:14:10.233201] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.189 [2024-10-09 03:14:10.233228] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:11:27.189 [2024-10-09 03:14:10.233241] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.189 [2024-10-09 03:14:10.233788] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.189 [2024-10-09 03:14:10.233811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:27.189 [2024-10-09 03:14:10.233924] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:27.189 [2024-10-09 03:14:10.233953] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:27.189 pt2 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.189 [2024-10-09 03:14:10.241018] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.189 03:14:10 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.189 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.189 "name": "raid_bdev1", 00:11:27.189 "uuid": "c452c327-95b8-4f57-a586-43bed6cb7468", 00:11:27.190 "strip_size_kb": 64, 00:11:27.190 "state": "configuring", 00:11:27.190 "raid_level": "raid0", 00:11:27.190 "superblock": true, 00:11:27.190 "num_base_bdevs": 4, 00:11:27.190 "num_base_bdevs_discovered": 1, 00:11:27.190 "num_base_bdevs_operational": 4, 00:11:27.190 "base_bdevs_list": [ 00:11:27.190 { 00:11:27.190 "name": "pt1", 00:11:27.190 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.190 "is_configured": true, 00:11:27.190 "data_offset": 2048, 00:11:27.190 "data_size": 63488 00:11:27.190 }, 00:11:27.190 { 00:11:27.190 "name": null, 00:11:27.190 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.190 "is_configured": false, 00:11:27.190 "data_offset": 0, 00:11:27.190 "data_size": 63488 00:11:27.190 }, 00:11:27.190 { 00:11:27.190 "name": null, 00:11:27.190 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.190 "is_configured": false, 00:11:27.190 "data_offset": 2048, 00:11:27.190 "data_size": 63488 00:11:27.190 }, 00:11:27.190 { 00:11:27.190 "name": null, 00:11:27.190 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.190 "is_configured": false, 00:11:27.190 "data_offset": 2048, 00:11:27.190 "data_size": 63488 00:11:27.190 } 00:11:27.190 ] 00:11:27.190 }' 00:11:27.190 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.190 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:27.448 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:27.448 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:27.448 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:27.448 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.448 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.448 [2024-10-09 03:14:10.633062] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:27.448 [2024-10-09 03:14:10.633243] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.448 [2024-10-09 03:14:10.633285] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:27.448 [2024-10-09 03:14:10.633334] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.448 [2024-10-09 03:14:10.633913] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.448 [2024-10-09 03:14:10.633971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:27.448 [2024-10-09 03:14:10.634100] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:27.448 [2024-10-09 03:14:10.634155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:27.448 pt2 00:11:27.448 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.448 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:27.448 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:27.448 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p 
pt3 -u 00000000-0000-0000-0000-000000000003 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.449 [2024-10-09 03:14:10.644989] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:27.449 [2024-10-09 03:14:10.645084] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.449 [2024-10-09 03:14:10.645128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:27.449 [2024-10-09 03:14:10.645173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.449 [2024-10-09 03:14:10.645568] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.449 [2024-10-09 03:14:10.645592] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:27.449 [2024-10-09 03:14:10.645655] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:27.449 [2024-10-09 03:14:10.645673] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:27.449 pt3 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.449 [2024-10-09 03:14:10.656954] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc4 00:11:27.449 [2024-10-09 03:14:10.657041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:27.449 [2024-10-09 03:14:10.657076] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:11:27.449 [2024-10-09 03:14:10.657106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:27.449 [2024-10-09 03:14:10.657479] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:27.449 [2024-10-09 03:14:10.657530] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:11:27.449 [2024-10-09 03:14:10.657617] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:11:27.449 [2024-10-09 03:14:10.657660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:11:27.449 [2024-10-09 03:14:10.657811] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:27.449 [2024-10-09 03:14:10.657863] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:27.449 [2024-10-09 03:14:10.658151] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:27.449 [2024-10-09 03:14:10.658330] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:27.449 [2024-10-09 03:14:10.658375] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:27.449 [2024-10-09 03:14:10.658540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:27.449 pt4 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:27.449 
03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.449 "name": "raid_bdev1", 00:11:27.449 "uuid": "c452c327-95b8-4f57-a586-43bed6cb7468", 00:11:27.449 "strip_size_kb": 64, 00:11:27.449 "state": "online", 00:11:27.449 "raid_level": "raid0", 00:11:27.449 "superblock": true, 00:11:27.449 
"num_base_bdevs": 4, 00:11:27.449 "num_base_bdevs_discovered": 4, 00:11:27.449 "num_base_bdevs_operational": 4, 00:11:27.449 "base_bdevs_list": [ 00:11:27.449 { 00:11:27.449 "name": "pt1", 00:11:27.449 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:27.449 "is_configured": true, 00:11:27.449 "data_offset": 2048, 00:11:27.449 "data_size": 63488 00:11:27.449 }, 00:11:27.449 { 00:11:27.449 "name": "pt2", 00:11:27.449 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:27.449 "is_configured": true, 00:11:27.449 "data_offset": 2048, 00:11:27.449 "data_size": 63488 00:11:27.449 }, 00:11:27.449 { 00:11:27.449 "name": "pt3", 00:11:27.449 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:27.449 "is_configured": true, 00:11:27.449 "data_offset": 2048, 00:11:27.449 "data_size": 63488 00:11:27.449 }, 00:11:27.449 { 00:11:27.449 "name": "pt4", 00:11:27.449 "uuid": "00000000-0000-0000-0000-000000000004", 00:11:27.449 "is_configured": true, 00:11:27.449 "data_offset": 2048, 00:11:27.449 "data_size": 63488 00:11:27.449 } 00:11:27.449 ] 00:11:27.449 }' 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.449 03:14:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.019 [2024-10-09 03:14:11.125379] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:28.019 "name": "raid_bdev1", 00:11:28.019 "aliases": [ 00:11:28.019 "c452c327-95b8-4f57-a586-43bed6cb7468" 00:11:28.019 ], 00:11:28.019 "product_name": "Raid Volume", 00:11:28.019 "block_size": 512, 00:11:28.019 "num_blocks": 253952, 00:11:28.019 "uuid": "c452c327-95b8-4f57-a586-43bed6cb7468", 00:11:28.019 "assigned_rate_limits": { 00:11:28.019 "rw_ios_per_sec": 0, 00:11:28.019 "rw_mbytes_per_sec": 0, 00:11:28.019 "r_mbytes_per_sec": 0, 00:11:28.019 "w_mbytes_per_sec": 0 00:11:28.019 }, 00:11:28.019 "claimed": false, 00:11:28.019 "zoned": false, 00:11:28.019 "supported_io_types": { 00:11:28.019 "read": true, 00:11:28.019 "write": true, 00:11:28.019 "unmap": true, 00:11:28.019 "flush": true, 00:11:28.019 "reset": true, 00:11:28.019 "nvme_admin": false, 00:11:28.019 "nvme_io": false, 00:11:28.019 "nvme_io_md": false, 00:11:28.019 "write_zeroes": true, 00:11:28.019 "zcopy": false, 00:11:28.019 "get_zone_info": false, 00:11:28.019 "zone_management": false, 00:11:28.019 "zone_append": false, 00:11:28.019 "compare": false, 00:11:28.019 "compare_and_write": false, 00:11:28.019 "abort": false, 00:11:28.019 "seek_hole": false, 00:11:28.019 "seek_data": false, 00:11:28.019 "copy": false, 00:11:28.019 "nvme_iov_md": false 00:11:28.019 }, 00:11:28.019 "memory_domains": [ 00:11:28.019 { 00:11:28.019 "dma_device_id": "system", 
00:11:28.019 "dma_device_type": 1 00:11:28.019 }, 00:11:28.019 { 00:11:28.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.019 "dma_device_type": 2 00:11:28.019 }, 00:11:28.019 { 00:11:28.019 "dma_device_id": "system", 00:11:28.019 "dma_device_type": 1 00:11:28.019 }, 00:11:28.019 { 00:11:28.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.019 "dma_device_type": 2 00:11:28.019 }, 00:11:28.019 { 00:11:28.019 "dma_device_id": "system", 00:11:28.019 "dma_device_type": 1 00:11:28.019 }, 00:11:28.019 { 00:11:28.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.019 "dma_device_type": 2 00:11:28.019 }, 00:11:28.019 { 00:11:28.019 "dma_device_id": "system", 00:11:28.019 "dma_device_type": 1 00:11:28.019 }, 00:11:28.019 { 00:11:28.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.019 "dma_device_type": 2 00:11:28.019 } 00:11:28.019 ], 00:11:28.019 "driver_specific": { 00:11:28.019 "raid": { 00:11:28.019 "uuid": "c452c327-95b8-4f57-a586-43bed6cb7468", 00:11:28.019 "strip_size_kb": 64, 00:11:28.019 "state": "online", 00:11:28.019 "raid_level": "raid0", 00:11:28.019 "superblock": true, 00:11:28.019 "num_base_bdevs": 4, 00:11:28.019 "num_base_bdevs_discovered": 4, 00:11:28.019 "num_base_bdevs_operational": 4, 00:11:28.019 "base_bdevs_list": [ 00:11:28.019 { 00:11:28.019 "name": "pt1", 00:11:28.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:28.019 "is_configured": true, 00:11:28.019 "data_offset": 2048, 00:11:28.019 "data_size": 63488 00:11:28.019 }, 00:11:28.019 { 00:11:28.019 "name": "pt2", 00:11:28.019 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:28.019 "is_configured": true, 00:11:28.019 "data_offset": 2048, 00:11:28.019 "data_size": 63488 00:11:28.019 }, 00:11:28.019 { 00:11:28.019 "name": "pt3", 00:11:28.019 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:28.019 "is_configured": true, 00:11:28.019 "data_offset": 2048, 00:11:28.019 "data_size": 63488 00:11:28.019 }, 00:11:28.019 { 00:11:28.019 "name": "pt4", 00:11:28.019 
"uuid": "00000000-0000-0000-0000-000000000004", 00:11:28.019 "is_configured": true, 00:11:28.019 "data_offset": 2048, 00:11:28.019 "data_size": 63488 00:11:28.019 } 00:11:28.019 ] 00:11:28.019 } 00:11:28.019 } 00:11:28.019 }' 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:28.019 pt2 00:11:28.019 pt3 00:11:28.019 pt4' 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.019 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.279 03:14:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:28.279 [2024-10-09 03:14:11.425234] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' c452c327-95b8-4f57-a586-43bed6cb7468 '!=' c452c327-95b8-4f57-a586-43bed6cb7468 ']' 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70869 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 70869 ']' 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 70869 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:11:28.279 03:14:11 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 70869 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:28.279 killing process with pid 70869 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 70869' 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 70869 00:11:28.279 [2024-10-09 03:14:11.510812] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:28.279 [2024-10-09 03:14:11.510947] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:28.279 [2024-10-09 03:14:11.511033] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:28.279 03:14:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 70869 00:11:28.279 [2024-10-09 03:14:11.511043] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:28.848 [2024-10-09 03:14:11.939171] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:30.228 03:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:30.228 00:11:30.228 real 0m5.662s 00:11:30.228 user 0m7.836s 00:11:30.228 sys 0m1.028s 00:11:30.228 ************************************ 00:11:30.228 END TEST raid_superblock_test 00:11:30.228 03:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:30.228 03:14:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.228 ************************************ 00:11:30.228 
03:14:13 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:11:30.228 03:14:13 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:30.228 03:14:13 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:30.228 03:14:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:30.228 ************************************ 00:11:30.228 START TEST raid_read_error_test 00:11:30.228 ************************************ 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 read 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:30.228 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.y5d8ozcAFx 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71130 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71130 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 
-t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 71130 ']' 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:30.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:30.229 03:14:13 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.229 [2024-10-09 03:14:13.479270] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:11:30.229 [2024-10-09 03:14:13.479375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71130 ] 00:11:30.488 [2024-10-09 03:14:13.642403] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:30.747 [2024-10-09 03:14:13.902812] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:31.006 [2024-10-09 03:14:14.127671] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.006 [2024-10-09 03:14:14.127721] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:31.006 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:31.006 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:31.006 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.006 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:31.006 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.006 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.267 BaseBdev1_malloc 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.267 true 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.267 [2024-10-09 03:14:14.350857] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:31.267 [2024-10-09 03:14:14.351020] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.267 [2024-10-09 03:14:14.351044] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:31.267 [2024-10-09 03:14:14.351056] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.267 [2024-10-09 03:14:14.353425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.267 [2024-10-09 03:14:14.353510] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:31.267 BaseBdev1 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.267 BaseBdev2_malloc 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.267 true 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.267 [2024-10-09 03:14:14.434163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:31.267 [2024-10-09 03:14:14.434285] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.267 [2024-10-09 03:14:14.434305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:31.267 [2024-10-09 03:14:14.434317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.267 [2024-10-09 03:14:14.436654] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.267 [2024-10-09 03:14:14.436695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:31.267 BaseBdev2 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.267 BaseBdev3_malloc 00:11:31.267 03:14:14 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.267 true 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.267 [2024-10-09 03:14:14.509676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:31.267 [2024-10-09 03:14:14.509818] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.267 [2024-10-09 03:14:14.509868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:31.267 [2024-10-09 03:14:14.509901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.267 [2024-10-09 03:14:14.512264] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.267 [2024-10-09 03:14:14.512340] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:31.267 BaseBdev3 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.267 BaseBdev4_malloc 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.267 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.527 true 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.527 [2024-10-09 03:14:14.582203] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:31.527 [2024-10-09 03:14:14.582326] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:31.527 [2024-10-09 03:14:14.582361] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:31.527 [2024-10-09 03:14:14.582393] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:31.527 [2024-10-09 03:14:14.584651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:31.527 [2024-10-09 03:14:14.584726] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:31.527 BaseBdev4 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.527 [2024-10-09 03:14:14.594255] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.527 [2024-10-09 03:14:14.596294] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:31.527 [2024-10-09 03:14:14.596413] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:31.527 [2024-10-09 03:14:14.596499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:31.527 [2024-10-09 03:14:14.596749] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:31.527 [2024-10-09 03:14:14.596798] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:31.527 [2024-10-09 03:14:14.597086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:31.527 [2024-10-09 03:14:14.597288] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:31.527 [2024-10-09 03:14:14.597325] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:31.527 [2024-10-09 03:14:14.597505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:31.527 03:14:14 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.527 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.528 "name": "raid_bdev1", 00:11:31.528 "uuid": "8740ccff-3f14-4952-8dc9-4b5809e0f029", 00:11:31.528 "strip_size_kb": 64, 00:11:31.528 "state": "online", 00:11:31.528 "raid_level": "raid0", 00:11:31.528 "superblock": true, 00:11:31.528 "num_base_bdevs": 4, 00:11:31.528 "num_base_bdevs_discovered": 4, 00:11:31.528 "num_base_bdevs_operational": 4, 00:11:31.528 "base_bdevs_list": [ 00:11:31.528 
{ 00:11:31.528 "name": "BaseBdev1", 00:11:31.528 "uuid": "ccc1cea8-8935-5edd-989b-59438d0d85e9", 00:11:31.528 "is_configured": true, 00:11:31.528 "data_offset": 2048, 00:11:31.528 "data_size": 63488 00:11:31.528 }, 00:11:31.528 { 00:11:31.528 "name": "BaseBdev2", 00:11:31.528 "uuid": "9a115121-8623-5bc4-810c-14596ffc2e9f", 00:11:31.528 "is_configured": true, 00:11:31.528 "data_offset": 2048, 00:11:31.528 "data_size": 63488 00:11:31.528 }, 00:11:31.528 { 00:11:31.528 "name": "BaseBdev3", 00:11:31.528 "uuid": "b5b7db24-6e0e-5499-be79-66446f50ace6", 00:11:31.528 "is_configured": true, 00:11:31.528 "data_offset": 2048, 00:11:31.528 "data_size": 63488 00:11:31.528 }, 00:11:31.528 { 00:11:31.528 "name": "BaseBdev4", 00:11:31.528 "uuid": "7b9ac24f-ef28-55f1-ba2c-b93140f0bf8a", 00:11:31.528 "is_configured": true, 00:11:31.528 "data_offset": 2048, 00:11:31.528 "data_size": 63488 00:11:31.528 } 00:11:31.528 ] 00:11:31.528 }' 00:11:31.528 03:14:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.528 03:14:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:31.787 03:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:31.787 03:14:15 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:32.046 [2024-10-09 03:14:15.170741] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.986 03:14:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.986 03:14:16 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.986 "name": "raid_bdev1", 00:11:32.986 "uuid": "8740ccff-3f14-4952-8dc9-4b5809e0f029", 00:11:32.986 "strip_size_kb": 64, 00:11:32.986 "state": "online", 00:11:32.986 "raid_level": "raid0", 00:11:32.986 "superblock": true, 00:11:32.986 "num_base_bdevs": 4, 00:11:32.986 "num_base_bdevs_discovered": 4, 00:11:32.986 "num_base_bdevs_operational": 4, 00:11:32.986 "base_bdevs_list": [ 00:11:32.986 { 00:11:32.986 "name": "BaseBdev1", 00:11:32.986 "uuid": "ccc1cea8-8935-5edd-989b-59438d0d85e9", 00:11:32.986 "is_configured": true, 00:11:32.986 "data_offset": 2048, 00:11:32.986 "data_size": 63488 00:11:32.986 }, 00:11:32.986 { 00:11:32.986 "name": "BaseBdev2", 00:11:32.986 "uuid": "9a115121-8623-5bc4-810c-14596ffc2e9f", 00:11:32.986 "is_configured": true, 00:11:32.986 "data_offset": 2048, 00:11:32.986 "data_size": 63488 00:11:32.986 }, 00:11:32.986 { 00:11:32.986 "name": "BaseBdev3", 00:11:32.986 "uuid": "b5b7db24-6e0e-5499-be79-66446f50ace6", 00:11:32.986 "is_configured": true, 00:11:32.986 "data_offset": 2048, 00:11:32.986 "data_size": 63488 00:11:32.986 }, 00:11:32.986 { 00:11:32.986 "name": "BaseBdev4", 00:11:32.986 "uuid": "7b9ac24f-ef28-55f1-ba2c-b93140f0bf8a", 00:11:32.986 "is_configured": true, 00:11:32.986 "data_offset": 2048, 00:11:32.986 "data_size": 63488 00:11:32.986 } 00:11:32.986 ] 00:11:32.986 }' 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.986 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.247 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:33.247 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.247 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:33.247 [2024-10-09 03:14:16.539058] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:33.247 [2024-10-09 03:14:16.539192] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:33.247 [2024-10-09 03:14:16.541690] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:33.247 [2024-10-09 03:14:16.541798] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:33.247 [2024-10-09 03:14:16.541880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:33.247 [2024-10-09 03:14:16.541933] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:33.247 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.247 03:14:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71130 00:11:33.247 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 71130 ']' 00:11:33.247 { 00:11:33.247 "results": [ 00:11:33.247 { 00:11:33.247 "job": "raid_bdev1", 00:11:33.247 "core_mask": "0x1", 00:11:33.247 "workload": "randrw", 00:11:33.247 "percentage": 50, 00:11:33.247 "status": "finished", 00:11:33.247 "queue_depth": 1, 00:11:33.247 "io_size": 131072, 00:11:33.247 "runtime": 1.368751, 00:11:33.247 "iops": 14101.176912382165, 00:11:33.247 "mibps": 1762.6471140477706, 00:11:33.247 "io_failed": 1, 00:11:33.247 "io_timeout": 0, 00:11:33.247 "avg_latency_us": 100.05877545553803, 00:11:33.247 "min_latency_us": 25.2646288209607, 00:11:33.247 "max_latency_us": 1280.6707423580785 00:11:33.247 } 00:11:33.247 ], 00:11:33.247 "core_count": 1 00:11:33.247 } 00:11:33.247 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 71130 00:11:33.247 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:11:33.507 03:14:16 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:33.507 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71130 00:11:33.507 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:33.507 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:33.507 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71130' 00:11:33.507 killing process with pid 71130 00:11:33.507 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 71130 00:11:33.507 [2024-10-09 03:14:16.572373] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:33.507 03:14:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 71130 00:11:33.767 [2024-10-09 03:14:16.916129] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:35.159 03:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.y5d8ozcAFx 00:11:35.159 03:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:35.159 03:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:35.159 03:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:35.159 03:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:35.159 ************************************ 00:11:35.159 END TEST raid_read_error_test 00:11:35.159 ************************************ 00:11:35.159 03:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:35.159 03:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:35.159 03:14:18 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:35.159 00:11:35.159 real 0m4.974s 
00:11:35.159 user 0m5.708s 00:11:35.159 sys 0m0.693s 00:11:35.159 03:14:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:35.159 03:14:18 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.159 03:14:18 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:11:35.159 03:14:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:35.159 03:14:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:35.159 03:14:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:35.159 ************************************ 00:11:35.159 START TEST raid_write_error_test 00:11:35.159 ************************************ 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid0 4 write 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.onQ7KvltcT 00:11:35.159 03:14:18 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71276 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71276 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 71276 ']' 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.159 03:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:35.160 03:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.160 03:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:35.160 03:14:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:35.419 [2024-10-09 03:14:18.522031] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:11:35.419 [2024-10-09 03:14:18.522227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71276 ] 00:11:35.419 [2024-10-09 03:14:18.687416] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.685 [2024-10-09 03:14:18.898852] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.955 [2024-10-09 03:14:19.108435] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:35.955 [2024-10-09 03:14:19.108555] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.215 BaseBdev1_malloc 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.215 true 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.215 [2024-10-09 03:14:19.423351] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:36.215 [2024-10-09 03:14:19.423473] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.215 [2024-10-09 03:14:19.423510] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:36.215 [2024-10-09 03:14:19.423542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.215 [2024-10-09 03:14:19.425691] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.215 [2024-10-09 03:14:19.425777] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:36.215 BaseBdev1 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.215 BaseBdev2_malloc 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:36.215 03:14:19 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.215 true 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.215 [2024-10-09 03:14:19.502688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:36.215 [2024-10-09 03:14:19.502745] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.215 [2024-10-09 03:14:19.502762] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:36.215 [2024-10-09 03:14:19.502772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.215 [2024-10-09 03:14:19.504950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.215 [2024-10-09 03:14:19.505046] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:36.215 BaseBdev2 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.215 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:11:36.475 BaseBdev3_malloc 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.475 true 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.475 [2024-10-09 03:14:19.571298] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:36.475 [2024-10-09 03:14:19.571409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.475 [2024-10-09 03:14:19.571445] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:36.475 [2024-10-09 03:14:19.571475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.475 [2024-10-09 03:14:19.573683] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.475 [2024-10-09 03:14:19.573774] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:36.475 BaseBdev3 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.475 BaseBdev4_malloc 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.475 true 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.475 [2024-10-09 03:14:19.640335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:11:36.475 [2024-10-09 03:14:19.640386] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:36.475 [2024-10-09 03:14:19.640403] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:36.475 [2024-10-09 03:14:19.640415] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:36.475 [2024-10-09 03:14:19.642553] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:36.475 [2024-10-09 03:14:19.642596] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:11:36.475 BaseBdev4 
00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.475 [2024-10-09 03:14:19.652379] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.475 [2024-10-09 03:14:19.654224] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:36.475 [2024-10-09 03:14:19.654350] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:36.475 [2024-10-09 03:14:19.654450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:36.475 [2024-10-09 03:14:19.654745] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:11:36.475 [2024-10-09 03:14:19.654810] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:36.475 [2024-10-09 03:14:19.655093] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:36.475 [2024-10-09 03:14:19.655288] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:11:36.475 [2024-10-09 03:14:19.655327] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:11:36.475 [2024-10-09 03:14:19.655538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.475 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:36.475 "name": "raid_bdev1", 00:11:36.475 "uuid": "7c5af380-c6b2-48cb-b069-a7a72c05878e", 00:11:36.475 "strip_size_kb": 64, 00:11:36.475 "state": "online", 00:11:36.475 "raid_level": "raid0", 00:11:36.475 "superblock": true, 00:11:36.475 "num_base_bdevs": 4, 00:11:36.475 "num_base_bdevs_discovered": 4, 00:11:36.475 
"num_base_bdevs_operational": 4, 00:11:36.475 "base_bdevs_list": [ 00:11:36.475 { 00:11:36.475 "name": "BaseBdev1", 00:11:36.475 "uuid": "ba5209a9-4965-548f-b896-02b0e5de9e0d", 00:11:36.475 "is_configured": true, 00:11:36.475 "data_offset": 2048, 00:11:36.475 "data_size": 63488 00:11:36.475 }, 00:11:36.475 { 00:11:36.475 "name": "BaseBdev2", 00:11:36.475 "uuid": "b3b2acd9-512e-5ee9-bd6e-e3517355bfd3", 00:11:36.475 "is_configured": true, 00:11:36.476 "data_offset": 2048, 00:11:36.476 "data_size": 63488 00:11:36.476 }, 00:11:36.476 { 00:11:36.476 "name": "BaseBdev3", 00:11:36.476 "uuid": "1a2e7849-0ed7-5336-83ab-7709c6f33c5d", 00:11:36.476 "is_configured": true, 00:11:36.476 "data_offset": 2048, 00:11:36.476 "data_size": 63488 00:11:36.476 }, 00:11:36.476 { 00:11:36.476 "name": "BaseBdev4", 00:11:36.476 "uuid": "6784bb03-6306-521b-87d9-33058c93fa1c", 00:11:36.476 "is_configured": true, 00:11:36.476 "data_offset": 2048, 00:11:36.476 "data_size": 63488 00:11:36.476 } 00:11:36.476 ] 00:11:36.476 }' 00:11:36.476 03:14:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:36.476 03:14:19 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.044 03:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:37.045 03:14:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:37.045 [2024-10-09 03:14:20.205270] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.984 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.984 "name": "raid_bdev1", 00:11:37.984 "uuid": "7c5af380-c6b2-48cb-b069-a7a72c05878e", 00:11:37.984 "strip_size_kb": 64, 00:11:37.984 "state": "online", 00:11:37.984 "raid_level": "raid0", 00:11:37.984 "superblock": true, 00:11:37.984 "num_base_bdevs": 4, 00:11:37.984 "num_base_bdevs_discovered": 4, 00:11:37.984 "num_base_bdevs_operational": 4, 00:11:37.984 "base_bdevs_list": [ 00:11:37.984 { 00:11:37.984 "name": "BaseBdev1", 00:11:37.984 "uuid": "ba5209a9-4965-548f-b896-02b0e5de9e0d", 00:11:37.984 "is_configured": true, 00:11:37.984 "data_offset": 2048, 00:11:37.984 "data_size": 63488 00:11:37.984 }, 00:11:37.984 { 00:11:37.984 "name": "BaseBdev2", 00:11:37.984 "uuid": "b3b2acd9-512e-5ee9-bd6e-e3517355bfd3", 00:11:37.984 "is_configured": true, 00:11:37.984 "data_offset": 2048, 00:11:37.984 "data_size": 63488 00:11:37.984 }, 00:11:37.984 { 00:11:37.984 "name": "BaseBdev3", 00:11:37.984 "uuid": "1a2e7849-0ed7-5336-83ab-7709c6f33c5d", 00:11:37.984 "is_configured": true, 00:11:37.984 "data_offset": 2048, 00:11:37.984 "data_size": 63488 00:11:37.984 }, 00:11:37.984 { 00:11:37.984 "name": "BaseBdev4", 00:11:37.984 "uuid": "6784bb03-6306-521b-87d9-33058c93fa1c", 00:11:37.984 "is_configured": true, 00:11:37.984 "data_offset": 2048, 00:11:37.984 "data_size": 63488 00:11:37.984 } 00:11:37.984 ] 00:11:37.984 }' 00:11:37.985 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.985 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:11:38.555 [2024-10-09 03:14:21.573310] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.555 [2024-10-09 03:14:21.573456] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.555 [2024-10-09 03:14:21.576408] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.555 [2024-10-09 03:14:21.576556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:38.555 [2024-10-09 03:14:21.576653] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.555 [2024-10-09 03:14:21.576725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:11:38.555 { 00:11:38.555 "results": [ 00:11:38.555 { 00:11:38.555 "job": "raid_bdev1", 00:11:38.555 "core_mask": "0x1", 00:11:38.555 "workload": "randrw", 00:11:38.555 "percentage": 50, 00:11:38.555 "status": "finished", 00:11:38.555 "queue_depth": 1, 00:11:38.555 "io_size": 131072, 00:11:38.555 "runtime": 1.368007, 00:11:38.555 "iops": 11535.759685440205, 00:11:38.555 "mibps": 1441.9699606800257, 00:11:38.555 "io_failed": 1, 00:11:38.555 "io_timeout": 0, 00:11:38.555 "avg_latency_us": 122.03776188560401, 00:11:38.555 "min_latency_us": 27.612227074235808, 00:11:38.555 "max_latency_us": 1616.9362445414847 00:11:38.555 } 00:11:38.555 ], 00:11:38.555 "core_count": 1 00:11:38.555 } 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71276 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 71276 ']' 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 71276 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # 
uname 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71276 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71276' 00:11:38.555 killing process with pid 71276 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 71276 00:11:38.555 [2024-10-09 03:14:21.624408] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:38.555 03:14:21 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 71276 00:11:38.815 [2024-10-09 03:14:22.037297] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:40.724 03:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.onQ7KvltcT 00:11:40.724 03:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:40.724 03:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:40.724 03:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:11:40.724 03:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:11:40.724 ************************************ 00:11:40.724 END TEST raid_write_error_test 00:11:40.724 ************************************ 00:11:40.724 03:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:40.724 03:14:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:40.724 03:14:23 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:11:40.724 00:11:40.724 real 0m5.289s 00:11:40.724 user 0m6.099s 00:11:40.724 sys 0m0.631s 00:11:40.724 03:14:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:40.724 03:14:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.724 03:14:23 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:40.724 03:14:23 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:11:40.724 03:14:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:40.724 03:14:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:40.724 03:14:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:40.724 ************************************ 00:11:40.724 START TEST raid_state_function_test 00:11:40.725 ************************************ 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 false 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.725 03:14:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:40.725 03:14:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71431 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71431' 00:11:40.725 Process raid pid: 71431 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71431 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 71431 ']' 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:40.725 03:14:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.725 [2024-10-09 03:14:23.877446] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:11:40.725 [2024-10-09 03:14:23.877610] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.985 [2024-10-09 03:14:24.044642] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:41.244 [2024-10-09 03:14:24.339872] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.504 [2024-10-09 03:14:24.614348] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.504 [2024-10-09 03:14:24.614407] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:41.504 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:41.504 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:11:41.504 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.764 [2024-10-09 03:14:24.814444] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:41.764 [2024-10-09 03:14:24.814647] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:41.764 [2024-10-09 03:14:24.814702] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:41.764 [2024-10-09 03:14:24.814738] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:41.764 [2024-10-09 03:14:24.814765] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:41.764 [2024-10-09 03:14:24.814805] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:41.764 [2024-10-09 03:14:24.814851] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:41.764 [2024-10-09 03:14:24.814893] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.764 "name": "Existed_Raid", 00:11:41.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.764 "strip_size_kb": 64, 00:11:41.764 "state": "configuring", 00:11:41.764 "raid_level": "concat", 00:11:41.764 "superblock": false, 00:11:41.764 "num_base_bdevs": 4, 00:11:41.764 "num_base_bdevs_discovered": 0, 00:11:41.764 "num_base_bdevs_operational": 4, 00:11:41.764 "base_bdevs_list": [ 00:11:41.764 { 00:11:41.764 "name": "BaseBdev1", 00:11:41.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.764 "is_configured": false, 00:11:41.764 "data_offset": 0, 00:11:41.764 "data_size": 0 00:11:41.764 }, 00:11:41.764 { 00:11:41.764 "name": "BaseBdev2", 00:11:41.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.764 "is_configured": false, 00:11:41.764 "data_offset": 0, 00:11:41.764 "data_size": 0 00:11:41.764 }, 00:11:41.764 { 00:11:41.764 "name": "BaseBdev3", 00:11:41.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.764 "is_configured": false, 00:11:41.764 "data_offset": 0, 00:11:41.764 "data_size": 0 00:11:41.764 }, 00:11:41.764 { 00:11:41.764 "name": "BaseBdev4", 00:11:41.764 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.764 "is_configured": false, 00:11:41.764 "data_offset": 0, 00:11:41.764 "data_size": 0 00:11:41.764 } 00:11:41.764 ] 00:11:41.764 }' 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.764 03:14:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.023 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:11:42.023 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.024 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.024 [2024-10-09 03:14:25.265540] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.024 [2024-10-09 03:14:25.265676] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:42.024 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.024 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:42.024 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.024 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.024 [2024-10-09 03:14:25.277488] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:42.024 [2024-10-09 03:14:25.277586] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:42.024 [2024-10-09 03:14:25.277614] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.024 [2024-10-09 03:14:25.277637] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.024 [2024-10-09 03:14:25.277656] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:42.024 [2024-10-09 03:14:25.277678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.024 [2024-10-09 03:14:25.277695] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:42.024 [2024-10-09 03:14:25.277717] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:42.024 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.024 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:42.024 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.024 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.284 [2024-10-09 03:14:25.341199] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.284 BaseBdev1 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.284 [ 00:11:42.284 { 00:11:42.284 "name": "BaseBdev1", 00:11:42.284 "aliases": [ 00:11:42.284 "7b4f80c2-c5ab-44a9-8c77-9bcaf2f9d591" 00:11:42.284 ], 00:11:42.284 "product_name": "Malloc disk", 00:11:42.284 "block_size": 512, 00:11:42.284 "num_blocks": 65536, 00:11:42.284 "uuid": "7b4f80c2-c5ab-44a9-8c77-9bcaf2f9d591", 00:11:42.284 "assigned_rate_limits": { 00:11:42.284 "rw_ios_per_sec": 0, 00:11:42.284 "rw_mbytes_per_sec": 0, 00:11:42.284 "r_mbytes_per_sec": 0, 00:11:42.284 "w_mbytes_per_sec": 0 00:11:42.284 }, 00:11:42.284 "claimed": true, 00:11:42.284 "claim_type": "exclusive_write", 00:11:42.284 "zoned": false, 00:11:42.284 "supported_io_types": { 00:11:42.284 "read": true, 00:11:42.284 "write": true, 00:11:42.284 "unmap": true, 00:11:42.284 "flush": true, 00:11:42.284 "reset": true, 00:11:42.284 "nvme_admin": false, 00:11:42.284 "nvme_io": false, 00:11:42.284 "nvme_io_md": false, 00:11:42.284 "write_zeroes": true, 00:11:42.284 "zcopy": true, 00:11:42.284 "get_zone_info": false, 00:11:42.284 "zone_management": false, 00:11:42.284 "zone_append": false, 00:11:42.284 "compare": false, 00:11:42.284 "compare_and_write": false, 00:11:42.284 "abort": true, 00:11:42.284 "seek_hole": false, 00:11:42.284 "seek_data": false, 00:11:42.284 "copy": true, 00:11:42.284 "nvme_iov_md": false 00:11:42.284 }, 00:11:42.284 "memory_domains": [ 00:11:42.284 { 00:11:42.284 "dma_device_id": "system", 00:11:42.284 "dma_device_type": 1 00:11:42.284 }, 00:11:42.284 { 00:11:42.284 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:42.284 "dma_device_type": 2 00:11:42.284 } 00:11:42.284 ], 00:11:42.284 "driver_specific": {} 00:11:42.284 } 00:11:42.284 ] 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.284 "name": "Existed_Raid", 
00:11:42.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.284 "strip_size_kb": 64, 00:11:42.284 "state": "configuring", 00:11:42.284 "raid_level": "concat", 00:11:42.284 "superblock": false, 00:11:42.284 "num_base_bdevs": 4, 00:11:42.284 "num_base_bdevs_discovered": 1, 00:11:42.284 "num_base_bdevs_operational": 4, 00:11:42.284 "base_bdevs_list": [ 00:11:42.284 { 00:11:42.284 "name": "BaseBdev1", 00:11:42.284 "uuid": "7b4f80c2-c5ab-44a9-8c77-9bcaf2f9d591", 00:11:42.284 "is_configured": true, 00:11:42.284 "data_offset": 0, 00:11:42.284 "data_size": 65536 00:11:42.284 }, 00:11:42.284 { 00:11:42.284 "name": "BaseBdev2", 00:11:42.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.284 "is_configured": false, 00:11:42.284 "data_offset": 0, 00:11:42.284 "data_size": 0 00:11:42.284 }, 00:11:42.284 { 00:11:42.284 "name": "BaseBdev3", 00:11:42.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.284 "is_configured": false, 00:11:42.284 "data_offset": 0, 00:11:42.284 "data_size": 0 00:11:42.284 }, 00:11:42.284 { 00:11:42.284 "name": "BaseBdev4", 00:11:42.284 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.284 "is_configured": false, 00:11:42.284 "data_offset": 0, 00:11:42.284 "data_size": 0 00:11:42.284 } 00:11:42.284 ] 00:11:42.284 }' 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.284 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.544 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:42.544 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.544 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.544 [2024-10-09 03:14:25.820481] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:42.544 [2024-10-09 03:14:25.820568] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:42.544 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.544 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:42.544 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.544 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.544 [2024-10-09 03:14:25.832517] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:42.544 [2024-10-09 03:14:25.834567] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:42.544 [2024-10-09 03:14:25.834644] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:42.544 [2024-10-09 03:14:25.834672] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:42.544 [2024-10-09 03:14:25.834695] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:42.544 [2024-10-09 03:14:25.834714] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:42.544 [2024-10-09 03:14:25.834733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:42.544 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.544 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:42.544 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:42.545 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:11:42.545 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:42.545 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.545 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:42.545 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:42.545 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:42.545 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.545 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.545 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.545 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.545 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:42.545 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.545 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.545 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.813 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.813 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.813 "name": "Existed_Raid", 00:11:42.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.813 "strip_size_kb": 64, 00:11:42.813 "state": "configuring", 00:11:42.813 "raid_level": "concat", 00:11:42.813 "superblock": false, 00:11:42.813 "num_base_bdevs": 4, 00:11:42.813 
"num_base_bdevs_discovered": 1, 00:11:42.813 "num_base_bdevs_operational": 4, 00:11:42.813 "base_bdevs_list": [ 00:11:42.813 { 00:11:42.813 "name": "BaseBdev1", 00:11:42.813 "uuid": "7b4f80c2-c5ab-44a9-8c77-9bcaf2f9d591", 00:11:42.813 "is_configured": true, 00:11:42.813 "data_offset": 0, 00:11:42.813 "data_size": 65536 00:11:42.813 }, 00:11:42.813 { 00:11:42.813 "name": "BaseBdev2", 00:11:42.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.813 "is_configured": false, 00:11:42.813 "data_offset": 0, 00:11:42.813 "data_size": 0 00:11:42.813 }, 00:11:42.813 { 00:11:42.813 "name": "BaseBdev3", 00:11:42.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.813 "is_configured": false, 00:11:42.813 "data_offset": 0, 00:11:42.813 "data_size": 0 00:11:42.813 }, 00:11:42.813 { 00:11:42.813 "name": "BaseBdev4", 00:11:42.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.813 "is_configured": false, 00:11:42.813 "data_offset": 0, 00:11:42.813 "data_size": 0 00:11:42.813 } 00:11:42.813 ] 00:11:42.813 }' 00:11:42.813 03:14:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.813 03:14:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.072 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:43.072 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.072 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.072 [2024-10-09 03:14:26.325153] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:43.072 BaseBdev2 00:11:43.072 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.072 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:43.072 03:14:26 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.073 [ 00:11:43.073 { 00:11:43.073 "name": "BaseBdev2", 00:11:43.073 "aliases": [ 00:11:43.073 "5b57d10f-500e-405d-bbec-96bb50a7dfa7" 00:11:43.073 ], 00:11:43.073 "product_name": "Malloc disk", 00:11:43.073 "block_size": 512, 00:11:43.073 "num_blocks": 65536, 00:11:43.073 "uuid": "5b57d10f-500e-405d-bbec-96bb50a7dfa7", 00:11:43.073 "assigned_rate_limits": { 00:11:43.073 "rw_ios_per_sec": 0, 00:11:43.073 "rw_mbytes_per_sec": 0, 00:11:43.073 "r_mbytes_per_sec": 0, 00:11:43.073 "w_mbytes_per_sec": 0 00:11:43.073 }, 00:11:43.073 "claimed": true, 00:11:43.073 "claim_type": "exclusive_write", 00:11:43.073 "zoned": false, 00:11:43.073 "supported_io_types": { 
00:11:43.073 "read": true, 00:11:43.073 "write": true, 00:11:43.073 "unmap": true, 00:11:43.073 "flush": true, 00:11:43.073 "reset": true, 00:11:43.073 "nvme_admin": false, 00:11:43.073 "nvme_io": false, 00:11:43.073 "nvme_io_md": false, 00:11:43.073 "write_zeroes": true, 00:11:43.073 "zcopy": true, 00:11:43.073 "get_zone_info": false, 00:11:43.073 "zone_management": false, 00:11:43.073 "zone_append": false, 00:11:43.073 "compare": false, 00:11:43.073 "compare_and_write": false, 00:11:43.073 "abort": true, 00:11:43.073 "seek_hole": false, 00:11:43.073 "seek_data": false, 00:11:43.073 "copy": true, 00:11:43.073 "nvme_iov_md": false 00:11:43.073 }, 00:11:43.073 "memory_domains": [ 00:11:43.073 { 00:11:43.073 "dma_device_id": "system", 00:11:43.073 "dma_device_type": 1 00:11:43.073 }, 00:11:43.073 { 00:11:43.073 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.073 "dma_device_type": 2 00:11:43.073 } 00:11:43.073 ], 00:11:43.073 "driver_specific": {} 00:11:43.073 } 00:11:43.073 ] 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.073 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.332 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.332 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.332 "name": "Existed_Raid", 00:11:43.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.332 "strip_size_kb": 64, 00:11:43.332 "state": "configuring", 00:11:43.332 "raid_level": "concat", 00:11:43.332 "superblock": false, 00:11:43.332 "num_base_bdevs": 4, 00:11:43.332 "num_base_bdevs_discovered": 2, 00:11:43.332 "num_base_bdevs_operational": 4, 00:11:43.332 "base_bdevs_list": [ 00:11:43.332 { 00:11:43.332 "name": "BaseBdev1", 00:11:43.332 "uuid": "7b4f80c2-c5ab-44a9-8c77-9bcaf2f9d591", 00:11:43.332 "is_configured": true, 00:11:43.332 "data_offset": 0, 00:11:43.332 "data_size": 65536 00:11:43.332 }, 00:11:43.332 { 00:11:43.332 "name": "BaseBdev2", 00:11:43.332 "uuid": "5b57d10f-500e-405d-bbec-96bb50a7dfa7", 00:11:43.332 
"is_configured": true, 00:11:43.332 "data_offset": 0, 00:11:43.332 "data_size": 65536 00:11:43.332 }, 00:11:43.332 { 00:11:43.332 "name": "BaseBdev3", 00:11:43.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.332 "is_configured": false, 00:11:43.332 "data_offset": 0, 00:11:43.332 "data_size": 0 00:11:43.332 }, 00:11:43.332 { 00:11:43.332 "name": "BaseBdev4", 00:11:43.332 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.332 "is_configured": false, 00:11:43.332 "data_offset": 0, 00:11:43.332 "data_size": 0 00:11:43.332 } 00:11:43.332 ] 00:11:43.332 }' 00:11:43.332 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.332 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.592 [2024-10-09 03:14:26.828582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:43.592 BaseBdev3 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.592 [ 00:11:43.592 { 00:11:43.592 "name": "BaseBdev3", 00:11:43.592 "aliases": [ 00:11:43.592 "e8fd5025-5d9d-4bfb-877c-6a7a21385eb5" 00:11:43.592 ], 00:11:43.592 "product_name": "Malloc disk", 00:11:43.592 "block_size": 512, 00:11:43.592 "num_blocks": 65536, 00:11:43.592 "uuid": "e8fd5025-5d9d-4bfb-877c-6a7a21385eb5", 00:11:43.592 "assigned_rate_limits": { 00:11:43.592 "rw_ios_per_sec": 0, 00:11:43.592 "rw_mbytes_per_sec": 0, 00:11:43.592 "r_mbytes_per_sec": 0, 00:11:43.592 "w_mbytes_per_sec": 0 00:11:43.592 }, 00:11:43.592 "claimed": true, 00:11:43.592 "claim_type": "exclusive_write", 00:11:43.592 "zoned": false, 00:11:43.592 "supported_io_types": { 00:11:43.592 "read": true, 00:11:43.592 "write": true, 00:11:43.592 "unmap": true, 00:11:43.592 "flush": true, 00:11:43.592 "reset": true, 00:11:43.592 "nvme_admin": false, 00:11:43.592 "nvme_io": false, 00:11:43.592 "nvme_io_md": false, 00:11:43.592 "write_zeroes": true, 00:11:43.592 "zcopy": true, 00:11:43.592 "get_zone_info": false, 00:11:43.592 "zone_management": false, 00:11:43.592 "zone_append": false, 00:11:43.592 "compare": false, 00:11:43.592 "compare_and_write": false, 
00:11:43.592 "abort": true, 00:11:43.592 "seek_hole": false, 00:11:43.592 "seek_data": false, 00:11:43.592 "copy": true, 00:11:43.592 "nvme_iov_md": false 00:11:43.592 }, 00:11:43.592 "memory_domains": [ 00:11:43.592 { 00:11:43.592 "dma_device_id": "system", 00:11:43.592 "dma_device_type": 1 00:11:43.592 }, 00:11:43.592 { 00:11:43.592 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.592 "dma_device_type": 2 00:11:43.592 } 00:11:43.592 ], 00:11:43.592 "driver_specific": {} 00:11:43.592 } 00:11:43.592 ] 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:43.592 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.593 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.593 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:43.593 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.593 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.593 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.852 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.852 "name": "Existed_Raid", 00:11:43.852 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.852 "strip_size_kb": 64, 00:11:43.852 "state": "configuring", 00:11:43.852 "raid_level": "concat", 00:11:43.852 "superblock": false, 00:11:43.852 "num_base_bdevs": 4, 00:11:43.852 "num_base_bdevs_discovered": 3, 00:11:43.852 "num_base_bdevs_operational": 4, 00:11:43.853 "base_bdevs_list": [ 00:11:43.853 { 00:11:43.853 "name": "BaseBdev1", 00:11:43.853 "uuid": "7b4f80c2-c5ab-44a9-8c77-9bcaf2f9d591", 00:11:43.853 "is_configured": true, 00:11:43.853 "data_offset": 0, 00:11:43.853 "data_size": 65536 00:11:43.853 }, 00:11:43.853 { 00:11:43.853 "name": "BaseBdev2", 00:11:43.853 "uuid": "5b57d10f-500e-405d-bbec-96bb50a7dfa7", 00:11:43.853 "is_configured": true, 00:11:43.853 "data_offset": 0, 00:11:43.853 "data_size": 65536 00:11:43.853 }, 00:11:43.853 { 00:11:43.853 "name": "BaseBdev3", 00:11:43.853 "uuid": "e8fd5025-5d9d-4bfb-877c-6a7a21385eb5", 00:11:43.853 "is_configured": true, 00:11:43.853 "data_offset": 0, 00:11:43.853 "data_size": 65536 00:11:43.853 }, 00:11:43.853 { 00:11:43.853 "name": "BaseBdev4", 00:11:43.853 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.853 "is_configured": false, 
00:11:43.853 "data_offset": 0, 00:11:43.853 "data_size": 0 00:11:43.853 } 00:11:43.853 ] 00:11:43.853 }' 00:11:43.853 03:14:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.853 03:14:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.115 [2024-10-09 03:14:27.376244] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:44.115 [2024-10-09 03:14:27.376410] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:44.115 [2024-10-09 03:14:27.376427] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:44.115 [2024-10-09 03:14:27.376818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:44.115 [2024-10-09 03:14:27.377088] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:44.115 [2024-10-09 03:14:27.377105] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:44.115 BaseBdev4 00:11:44.115 [2024-10-09 03:14:27.377469] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.115 [ 00:11:44.115 { 00:11:44.115 "name": "BaseBdev4", 00:11:44.115 "aliases": [ 00:11:44.115 "bf89d8c3-964b-4bb9-8ca3-21613c9209a1" 00:11:44.115 ], 00:11:44.115 "product_name": "Malloc disk", 00:11:44.115 "block_size": 512, 00:11:44.115 "num_blocks": 65536, 00:11:44.115 "uuid": "bf89d8c3-964b-4bb9-8ca3-21613c9209a1", 00:11:44.115 "assigned_rate_limits": { 00:11:44.115 "rw_ios_per_sec": 0, 00:11:44.115 "rw_mbytes_per_sec": 0, 00:11:44.115 "r_mbytes_per_sec": 0, 00:11:44.115 "w_mbytes_per_sec": 0 00:11:44.115 }, 00:11:44.115 "claimed": true, 00:11:44.115 "claim_type": "exclusive_write", 00:11:44.115 "zoned": false, 00:11:44.115 "supported_io_types": { 00:11:44.115 "read": true, 00:11:44.115 "write": true, 00:11:44.115 "unmap": true, 00:11:44.115 "flush": true, 00:11:44.115 "reset": true, 00:11:44.115 
"nvme_admin": false, 00:11:44.115 "nvme_io": false, 00:11:44.115 "nvme_io_md": false, 00:11:44.115 "write_zeroes": true, 00:11:44.115 "zcopy": true, 00:11:44.115 "get_zone_info": false, 00:11:44.115 "zone_management": false, 00:11:44.115 "zone_append": false, 00:11:44.115 "compare": false, 00:11:44.115 "compare_and_write": false, 00:11:44.115 "abort": true, 00:11:44.115 "seek_hole": false, 00:11:44.115 "seek_data": false, 00:11:44.115 "copy": true, 00:11:44.115 "nvme_iov_md": false 00:11:44.115 }, 00:11:44.115 "memory_domains": [ 00:11:44.115 { 00:11:44.115 "dma_device_id": "system", 00:11:44.115 "dma_device_type": 1 00:11:44.115 }, 00:11:44.115 { 00:11:44.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.115 "dma_device_type": 2 00:11:44.115 } 00:11:44.115 ], 00:11:44.115 "driver_specific": {} 00:11:44.115 } 00:11:44.115 ] 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:44.115 
03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:44.115 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:44.375 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:44.375 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.375 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.375 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:44.375 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.375 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:44.375 "name": "Existed_Raid", 00:11:44.375 "uuid": "0b3e127a-5b60-4b6c-9ca5-7fa9dbafe0c8", 00:11:44.375 "strip_size_kb": 64, 00:11:44.375 "state": "online", 00:11:44.375 "raid_level": "concat", 00:11:44.375 "superblock": false, 00:11:44.375 "num_base_bdevs": 4, 00:11:44.375 "num_base_bdevs_discovered": 4, 00:11:44.375 "num_base_bdevs_operational": 4, 00:11:44.375 "base_bdevs_list": [ 00:11:44.375 { 00:11:44.375 "name": "BaseBdev1", 00:11:44.375 "uuid": "7b4f80c2-c5ab-44a9-8c77-9bcaf2f9d591", 00:11:44.375 "is_configured": true, 00:11:44.375 "data_offset": 0, 00:11:44.375 "data_size": 65536 00:11:44.375 }, 00:11:44.375 { 00:11:44.375 "name": "BaseBdev2", 00:11:44.375 "uuid": "5b57d10f-500e-405d-bbec-96bb50a7dfa7", 00:11:44.375 "is_configured": true, 00:11:44.375 "data_offset": 0, 00:11:44.375 "data_size": 65536 00:11:44.375 }, 00:11:44.375 { 00:11:44.375 "name": "BaseBdev3", 
00:11:44.375 "uuid": "e8fd5025-5d9d-4bfb-877c-6a7a21385eb5", 00:11:44.375 "is_configured": true, 00:11:44.375 "data_offset": 0, 00:11:44.375 "data_size": 65536 00:11:44.375 }, 00:11:44.375 { 00:11:44.375 "name": "BaseBdev4", 00:11:44.375 "uuid": "bf89d8c3-964b-4bb9-8ca3-21613c9209a1", 00:11:44.375 "is_configured": true, 00:11:44.375 "data_offset": 0, 00:11:44.375 "data_size": 65536 00:11:44.375 } 00:11:44.375 ] 00:11:44.375 }' 00:11:44.375 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:44.375 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.634 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:44.634 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:44.634 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:44.634 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:44.634 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:44.634 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:44.634 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:44.634 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.634 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.634 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:44.634 [2024-10-09 03:14:27.915894] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:44.634 03:14:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.895 
03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:44.895 "name": "Existed_Raid", 00:11:44.895 "aliases": [ 00:11:44.895 "0b3e127a-5b60-4b6c-9ca5-7fa9dbafe0c8" 00:11:44.895 ], 00:11:44.895 "product_name": "Raid Volume", 00:11:44.895 "block_size": 512, 00:11:44.895 "num_blocks": 262144, 00:11:44.895 "uuid": "0b3e127a-5b60-4b6c-9ca5-7fa9dbafe0c8", 00:11:44.895 "assigned_rate_limits": { 00:11:44.895 "rw_ios_per_sec": 0, 00:11:44.895 "rw_mbytes_per_sec": 0, 00:11:44.895 "r_mbytes_per_sec": 0, 00:11:44.895 "w_mbytes_per_sec": 0 00:11:44.895 }, 00:11:44.895 "claimed": false, 00:11:44.895 "zoned": false, 00:11:44.895 "supported_io_types": { 00:11:44.895 "read": true, 00:11:44.895 "write": true, 00:11:44.895 "unmap": true, 00:11:44.895 "flush": true, 00:11:44.895 "reset": true, 00:11:44.895 "nvme_admin": false, 00:11:44.895 "nvme_io": false, 00:11:44.895 "nvme_io_md": false, 00:11:44.895 "write_zeroes": true, 00:11:44.895 "zcopy": false, 00:11:44.895 "get_zone_info": false, 00:11:44.895 "zone_management": false, 00:11:44.895 "zone_append": false, 00:11:44.895 "compare": false, 00:11:44.895 "compare_and_write": false, 00:11:44.895 "abort": false, 00:11:44.895 "seek_hole": false, 00:11:44.895 "seek_data": false, 00:11:44.895 "copy": false, 00:11:44.895 "nvme_iov_md": false 00:11:44.895 }, 00:11:44.895 "memory_domains": [ 00:11:44.895 { 00:11:44.895 "dma_device_id": "system", 00:11:44.895 "dma_device_type": 1 00:11:44.895 }, 00:11:44.895 { 00:11:44.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.895 "dma_device_type": 2 00:11:44.895 }, 00:11:44.895 { 00:11:44.895 "dma_device_id": "system", 00:11:44.895 "dma_device_type": 1 00:11:44.895 }, 00:11:44.895 { 00:11:44.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.895 "dma_device_type": 2 00:11:44.895 }, 00:11:44.895 { 00:11:44.895 "dma_device_id": "system", 00:11:44.895 "dma_device_type": 1 00:11:44.895 }, 00:11:44.895 { 00:11:44.895 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:11:44.895 "dma_device_type": 2 00:11:44.895 }, 00:11:44.895 { 00:11:44.895 "dma_device_id": "system", 00:11:44.895 "dma_device_type": 1 00:11:44.895 }, 00:11:44.895 { 00:11:44.895 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.895 "dma_device_type": 2 00:11:44.895 } 00:11:44.895 ], 00:11:44.895 "driver_specific": { 00:11:44.895 "raid": { 00:11:44.895 "uuid": "0b3e127a-5b60-4b6c-9ca5-7fa9dbafe0c8", 00:11:44.895 "strip_size_kb": 64, 00:11:44.895 "state": "online", 00:11:44.895 "raid_level": "concat", 00:11:44.895 "superblock": false, 00:11:44.895 "num_base_bdevs": 4, 00:11:44.895 "num_base_bdevs_discovered": 4, 00:11:44.895 "num_base_bdevs_operational": 4, 00:11:44.895 "base_bdevs_list": [ 00:11:44.895 { 00:11:44.895 "name": "BaseBdev1", 00:11:44.895 "uuid": "7b4f80c2-c5ab-44a9-8c77-9bcaf2f9d591", 00:11:44.895 "is_configured": true, 00:11:44.895 "data_offset": 0, 00:11:44.895 "data_size": 65536 00:11:44.895 }, 00:11:44.895 { 00:11:44.895 "name": "BaseBdev2", 00:11:44.895 "uuid": "5b57d10f-500e-405d-bbec-96bb50a7dfa7", 00:11:44.895 "is_configured": true, 00:11:44.895 "data_offset": 0, 00:11:44.895 "data_size": 65536 00:11:44.895 }, 00:11:44.895 { 00:11:44.895 "name": "BaseBdev3", 00:11:44.895 "uuid": "e8fd5025-5d9d-4bfb-877c-6a7a21385eb5", 00:11:44.895 "is_configured": true, 00:11:44.895 "data_offset": 0, 00:11:44.895 "data_size": 65536 00:11:44.895 }, 00:11:44.895 { 00:11:44.895 "name": "BaseBdev4", 00:11:44.895 "uuid": "bf89d8c3-964b-4bb9-8ca3-21613c9209a1", 00:11:44.895 "is_configured": true, 00:11:44.895 "data_offset": 0, 00:11:44.895 "data_size": 65536 00:11:44.895 } 00:11:44.895 ] 00:11:44.895 } 00:11:44.895 } 00:11:44.895 }' 00:11:44.895 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:44.895 03:14:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:44.895 BaseBdev2 
00:11:44.895 BaseBdev3 00:11:44.895 BaseBdev4' 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.895 03:14:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.895 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:45.154 03:14:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.154 [2024-10-09 03:14:28.231089] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:45.154 [2024-10-09 03:14:28.231150] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:45.154 [2024-10-09 03:14:28.231234] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:45.154 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:45.155 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:11:45.155 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:45.155 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:45.155 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:45.155 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:45.155 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:45.155 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.155 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:45.155 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.155 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.155 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.155 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:45.155 "name": "Existed_Raid", 00:11:45.155 "uuid": "0b3e127a-5b60-4b6c-9ca5-7fa9dbafe0c8", 00:11:45.155 "strip_size_kb": 64, 00:11:45.155 "state": "offline", 00:11:45.155 "raid_level": "concat", 00:11:45.155 "superblock": false, 00:11:45.155 "num_base_bdevs": 4, 00:11:45.155 "num_base_bdevs_discovered": 3, 00:11:45.155 "num_base_bdevs_operational": 3, 00:11:45.155 "base_bdevs_list": [ 00:11:45.155 { 00:11:45.155 "name": null, 00:11:45.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.155 "is_configured": false, 00:11:45.155 "data_offset": 0, 00:11:45.155 "data_size": 65536 00:11:45.155 }, 00:11:45.155 { 00:11:45.155 "name": "BaseBdev2", 00:11:45.155 "uuid": "5b57d10f-500e-405d-bbec-96bb50a7dfa7", 00:11:45.155 "is_configured": 
true, 00:11:45.155 "data_offset": 0, 00:11:45.155 "data_size": 65536 00:11:45.155 }, 00:11:45.155 { 00:11:45.155 "name": "BaseBdev3", 00:11:45.155 "uuid": "e8fd5025-5d9d-4bfb-877c-6a7a21385eb5", 00:11:45.155 "is_configured": true, 00:11:45.155 "data_offset": 0, 00:11:45.155 "data_size": 65536 00:11:45.155 }, 00:11:45.155 { 00:11:45.155 "name": "BaseBdev4", 00:11:45.155 "uuid": "bf89d8c3-964b-4bb9-8ca3-21613c9209a1", 00:11:45.155 "is_configured": true, 00:11:45.155 "data_offset": 0, 00:11:45.155 "data_size": 65536 00:11:45.155 } 00:11:45.155 ] 00:11:45.155 }' 00:11:45.155 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:45.155 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.723 [2024-10-09 03:14:28.853945] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.723 03:14:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.723 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.988 [2024-10-09 03:14:29.039779] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:45.988 03:14:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.988 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.988 [2024-10-09 03:14:29.221889] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:45.988 [2024-10-09 03:14:29.222057] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.249 BaseBdev2 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.249 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.249 [ 00:11:46.249 { 00:11:46.249 "name": "BaseBdev2", 00:11:46.249 "aliases": [ 00:11:46.249 "b853198f-5c60-4c17-a995-7d2d95b25616" 00:11:46.249 ], 00:11:46.249 "product_name": "Malloc disk", 00:11:46.249 "block_size": 512, 00:11:46.249 "num_blocks": 65536, 00:11:46.249 "uuid": "b853198f-5c60-4c17-a995-7d2d95b25616", 00:11:46.249 "assigned_rate_limits": { 00:11:46.249 "rw_ios_per_sec": 0, 00:11:46.249 "rw_mbytes_per_sec": 0, 00:11:46.249 "r_mbytes_per_sec": 0, 00:11:46.249 "w_mbytes_per_sec": 0 00:11:46.249 }, 00:11:46.249 "claimed": false, 00:11:46.249 "zoned": false, 00:11:46.249 "supported_io_types": { 00:11:46.249 "read": true, 00:11:46.249 "write": true, 00:11:46.249 "unmap": true, 00:11:46.249 "flush": true, 00:11:46.249 "reset": true, 00:11:46.249 "nvme_admin": false, 00:11:46.249 "nvme_io": false, 00:11:46.249 "nvme_io_md": false, 00:11:46.249 "write_zeroes": true, 00:11:46.249 "zcopy": true, 00:11:46.249 "get_zone_info": false, 00:11:46.250 "zone_management": false, 00:11:46.250 "zone_append": false, 00:11:46.250 "compare": false, 00:11:46.250 "compare_and_write": false, 00:11:46.250 "abort": true, 00:11:46.250 "seek_hole": false, 00:11:46.250 
"seek_data": false, 00:11:46.250 "copy": true, 00:11:46.250 "nvme_iov_md": false 00:11:46.250 }, 00:11:46.250 "memory_domains": [ 00:11:46.250 { 00:11:46.250 "dma_device_id": "system", 00:11:46.250 "dma_device_type": 1 00:11:46.250 }, 00:11:46.250 { 00:11:46.250 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.250 "dma_device_type": 2 00:11:46.250 } 00:11:46.250 ], 00:11:46.250 "driver_specific": {} 00:11:46.250 } 00:11:46.250 ] 00:11:46.250 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.250 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:46.250 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:46.250 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:46.250 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:46.250 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.250 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.250 BaseBdev3 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 
00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.509 [ 00:11:46.509 { 00:11:46.509 "name": "BaseBdev3", 00:11:46.509 "aliases": [ 00:11:46.509 "746cde11-832e-4a80-bf0a-725281f646b9" 00:11:46.509 ], 00:11:46.509 "product_name": "Malloc disk", 00:11:46.509 "block_size": 512, 00:11:46.509 "num_blocks": 65536, 00:11:46.509 "uuid": "746cde11-832e-4a80-bf0a-725281f646b9", 00:11:46.509 "assigned_rate_limits": { 00:11:46.509 "rw_ios_per_sec": 0, 00:11:46.509 "rw_mbytes_per_sec": 0, 00:11:46.509 "r_mbytes_per_sec": 0, 00:11:46.509 "w_mbytes_per_sec": 0 00:11:46.509 }, 00:11:46.509 "claimed": false, 00:11:46.509 "zoned": false, 00:11:46.509 "supported_io_types": { 00:11:46.509 "read": true, 00:11:46.509 "write": true, 00:11:46.509 "unmap": true, 00:11:46.509 "flush": true, 00:11:46.509 "reset": true, 00:11:46.509 "nvme_admin": false, 00:11:46.509 "nvme_io": false, 00:11:46.509 "nvme_io_md": false, 00:11:46.509 "write_zeroes": true, 00:11:46.509 "zcopy": true, 00:11:46.509 "get_zone_info": false, 00:11:46.509 "zone_management": false, 00:11:46.509 "zone_append": false, 00:11:46.509 "compare": false, 00:11:46.509 "compare_and_write": false, 00:11:46.509 "abort": true, 00:11:46.509 "seek_hole": false, 00:11:46.509 "seek_data": false, 
00:11:46.509 "copy": true, 00:11:46.509 "nvme_iov_md": false 00:11:46.509 }, 00:11:46.509 "memory_domains": [ 00:11:46.509 { 00:11:46.509 "dma_device_id": "system", 00:11:46.509 "dma_device_type": 1 00:11:46.509 }, 00:11:46.509 { 00:11:46.509 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.509 "dma_device_type": 2 00:11:46.509 } 00:11:46.509 ], 00:11:46.509 "driver_specific": {} 00:11:46.509 } 00:11:46.509 ] 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:46.509 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.510 BaseBdev4 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:46.510 
03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.510 [ 00:11:46.510 { 00:11:46.510 "name": "BaseBdev4", 00:11:46.510 "aliases": [ 00:11:46.510 "3b8df76e-8946-416f-aaf8-661404a8f8dd" 00:11:46.510 ], 00:11:46.510 "product_name": "Malloc disk", 00:11:46.510 "block_size": 512, 00:11:46.510 "num_blocks": 65536, 00:11:46.510 "uuid": "3b8df76e-8946-416f-aaf8-661404a8f8dd", 00:11:46.510 "assigned_rate_limits": { 00:11:46.510 "rw_ios_per_sec": 0, 00:11:46.510 "rw_mbytes_per_sec": 0, 00:11:46.510 "r_mbytes_per_sec": 0, 00:11:46.510 "w_mbytes_per_sec": 0 00:11:46.510 }, 00:11:46.510 "claimed": false, 00:11:46.510 "zoned": false, 00:11:46.510 "supported_io_types": { 00:11:46.510 "read": true, 00:11:46.510 "write": true, 00:11:46.510 "unmap": true, 00:11:46.510 "flush": true, 00:11:46.510 "reset": true, 00:11:46.510 "nvme_admin": false, 00:11:46.510 "nvme_io": false, 00:11:46.510 "nvme_io_md": false, 00:11:46.510 "write_zeroes": true, 00:11:46.510 "zcopy": true, 00:11:46.510 "get_zone_info": false, 00:11:46.510 "zone_management": false, 00:11:46.510 "zone_append": false, 00:11:46.510 "compare": false, 00:11:46.510 "compare_and_write": false, 00:11:46.510 "abort": true, 00:11:46.510 "seek_hole": false, 00:11:46.510 "seek_data": false, 00:11:46.510 
"copy": true, 00:11:46.510 "nvme_iov_md": false 00:11:46.510 }, 00:11:46.510 "memory_domains": [ 00:11:46.510 { 00:11:46.510 "dma_device_id": "system", 00:11:46.510 "dma_device_type": 1 00:11:46.510 }, 00:11:46.510 { 00:11:46.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:46.510 "dma_device_type": 2 00:11:46.510 } 00:11:46.510 ], 00:11:46.510 "driver_specific": {} 00:11:46.510 } 00:11:46.510 ] 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.510 [2024-10-09 03:14:29.689786] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:46.510 [2024-10-09 03:14:29.689961] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:46.510 [2024-10-09 03:14:29.690026] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.510 [2024-10-09 03:14:29.692604] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.510 [2024-10-09 03:14:29.692726] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.510 03:14:29 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.510 "name": "Existed_Raid", 00:11:46.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.510 "strip_size_kb": 64, 00:11:46.510 "state": "configuring", 00:11:46.510 
"raid_level": "concat", 00:11:46.510 "superblock": false, 00:11:46.510 "num_base_bdevs": 4, 00:11:46.510 "num_base_bdevs_discovered": 3, 00:11:46.510 "num_base_bdevs_operational": 4, 00:11:46.510 "base_bdevs_list": [ 00:11:46.510 { 00:11:46.510 "name": "BaseBdev1", 00:11:46.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.510 "is_configured": false, 00:11:46.510 "data_offset": 0, 00:11:46.510 "data_size": 0 00:11:46.510 }, 00:11:46.510 { 00:11:46.510 "name": "BaseBdev2", 00:11:46.510 "uuid": "b853198f-5c60-4c17-a995-7d2d95b25616", 00:11:46.510 "is_configured": true, 00:11:46.510 "data_offset": 0, 00:11:46.510 "data_size": 65536 00:11:46.510 }, 00:11:46.510 { 00:11:46.510 "name": "BaseBdev3", 00:11:46.510 "uuid": "746cde11-832e-4a80-bf0a-725281f646b9", 00:11:46.510 "is_configured": true, 00:11:46.510 "data_offset": 0, 00:11:46.510 "data_size": 65536 00:11:46.510 }, 00:11:46.510 { 00:11:46.510 "name": "BaseBdev4", 00:11:46.510 "uuid": "3b8df76e-8946-416f-aaf8-661404a8f8dd", 00:11:46.510 "is_configured": true, 00:11:46.510 "data_offset": 0, 00:11:46.510 "data_size": 65536 00:11:46.510 } 00:11:46.510 ] 00:11:46.510 }' 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.510 03:14:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.079 [2024-10-09 03:14:30.121111] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.079 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.079 "name": "Existed_Raid", 00:11:47.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.079 "strip_size_kb": 64, 00:11:47.079 "state": "configuring", 00:11:47.079 "raid_level": "concat", 00:11:47.079 "superblock": false, 
00:11:47.079 "num_base_bdevs": 4, 00:11:47.079 "num_base_bdevs_discovered": 2, 00:11:47.079 "num_base_bdevs_operational": 4, 00:11:47.079 "base_bdevs_list": [ 00:11:47.079 { 00:11:47.079 "name": "BaseBdev1", 00:11:47.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.079 "is_configured": false, 00:11:47.079 "data_offset": 0, 00:11:47.079 "data_size": 0 00:11:47.079 }, 00:11:47.079 { 00:11:47.079 "name": null, 00:11:47.079 "uuid": "b853198f-5c60-4c17-a995-7d2d95b25616", 00:11:47.079 "is_configured": false, 00:11:47.079 "data_offset": 0, 00:11:47.079 "data_size": 65536 00:11:47.079 }, 00:11:47.079 { 00:11:47.079 "name": "BaseBdev3", 00:11:47.079 "uuid": "746cde11-832e-4a80-bf0a-725281f646b9", 00:11:47.079 "is_configured": true, 00:11:47.079 "data_offset": 0, 00:11:47.080 "data_size": 65536 00:11:47.080 }, 00:11:47.080 { 00:11:47.080 "name": "BaseBdev4", 00:11:47.080 "uuid": "3b8df76e-8946-416f-aaf8-661404a8f8dd", 00:11:47.080 "is_configured": true, 00:11:47.080 "data_offset": 0, 00:11:47.080 "data_size": 65536 00:11:47.080 } 00:11:47.080 ] 00:11:47.080 }' 00:11:47.080 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.080 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.339 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.339 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:47.339 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.339 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.339 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.339 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:47.339 03:14:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:47.339 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.339 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.598 [2024-10-09 03:14:30.658048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:47.598 BaseBdev1 00:11:47.598 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.598 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:47.598 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:47.598 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:47.598 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:47.598 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:47.598 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:47.598 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:47.598 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.598 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.598 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.598 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:47.598 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.598 03:14:30 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:47.598 [ 00:11:47.598 { 00:11:47.598 "name": "BaseBdev1", 00:11:47.598 "aliases": [ 00:11:47.598 "18b28fc3-878e-4831-944e-c33de985871f" 00:11:47.598 ], 00:11:47.598 "product_name": "Malloc disk", 00:11:47.598 "block_size": 512, 00:11:47.599 "num_blocks": 65536, 00:11:47.599 "uuid": "18b28fc3-878e-4831-944e-c33de985871f", 00:11:47.599 "assigned_rate_limits": { 00:11:47.599 "rw_ios_per_sec": 0, 00:11:47.599 "rw_mbytes_per_sec": 0, 00:11:47.599 "r_mbytes_per_sec": 0, 00:11:47.599 "w_mbytes_per_sec": 0 00:11:47.599 }, 00:11:47.599 "claimed": true, 00:11:47.599 "claim_type": "exclusive_write", 00:11:47.599 "zoned": false, 00:11:47.599 "supported_io_types": { 00:11:47.599 "read": true, 00:11:47.599 "write": true, 00:11:47.599 "unmap": true, 00:11:47.599 "flush": true, 00:11:47.599 "reset": true, 00:11:47.599 "nvme_admin": false, 00:11:47.599 "nvme_io": false, 00:11:47.599 "nvme_io_md": false, 00:11:47.599 "write_zeroes": true, 00:11:47.599 "zcopy": true, 00:11:47.599 "get_zone_info": false, 00:11:47.599 "zone_management": false, 00:11:47.599 "zone_append": false, 00:11:47.599 "compare": false, 00:11:47.599 "compare_and_write": false, 00:11:47.599 "abort": true, 00:11:47.599 "seek_hole": false, 00:11:47.599 "seek_data": false, 00:11:47.599 "copy": true, 00:11:47.599 "nvme_iov_md": false 00:11:47.599 }, 00:11:47.599 "memory_domains": [ 00:11:47.599 { 00:11:47.599 "dma_device_id": "system", 00:11:47.599 "dma_device_type": 1 00:11:47.599 }, 00:11:47.599 { 00:11:47.599 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.599 "dma_device_type": 2 00:11:47.599 } 00:11:47.599 ], 00:11:47.599 "driver_specific": {} 00:11:47.599 } 00:11:47.599 ] 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:47.599 "name": "Existed_Raid", 00:11:47.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.599 "strip_size_kb": 64, 00:11:47.599 "state": "configuring", 00:11:47.599 "raid_level": "concat", 00:11:47.599 "superblock": false, 
00:11:47.599 "num_base_bdevs": 4, 00:11:47.599 "num_base_bdevs_discovered": 3, 00:11:47.599 "num_base_bdevs_operational": 4, 00:11:47.599 "base_bdevs_list": [ 00:11:47.599 { 00:11:47.599 "name": "BaseBdev1", 00:11:47.599 "uuid": "18b28fc3-878e-4831-944e-c33de985871f", 00:11:47.599 "is_configured": true, 00:11:47.599 "data_offset": 0, 00:11:47.599 "data_size": 65536 00:11:47.599 }, 00:11:47.599 { 00:11:47.599 "name": null, 00:11:47.599 "uuid": "b853198f-5c60-4c17-a995-7d2d95b25616", 00:11:47.599 "is_configured": false, 00:11:47.599 "data_offset": 0, 00:11:47.599 "data_size": 65536 00:11:47.599 }, 00:11:47.599 { 00:11:47.599 "name": "BaseBdev3", 00:11:47.599 "uuid": "746cde11-832e-4a80-bf0a-725281f646b9", 00:11:47.599 "is_configured": true, 00:11:47.599 "data_offset": 0, 00:11:47.599 "data_size": 65536 00:11:47.599 }, 00:11:47.599 { 00:11:47.599 "name": "BaseBdev4", 00:11:47.599 "uuid": "3b8df76e-8946-416f-aaf8-661404a8f8dd", 00:11:47.599 "is_configured": true, 00:11:47.599 "data_offset": 0, 00:11:47.599 "data_size": 65536 00:11:47.599 } 00:11:47.599 ] 00:11:47.599 }' 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:47.599 03:14:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:48.168 03:14:31 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.168 [2024-10-09 03:14:31.253213] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.168 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.168 "name": "Existed_Raid", 00:11:48.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.168 "strip_size_kb": 64, 00:11:48.168 "state": "configuring", 00:11:48.168 "raid_level": "concat", 00:11:48.168 "superblock": false, 00:11:48.168 "num_base_bdevs": 4, 00:11:48.168 "num_base_bdevs_discovered": 2, 00:11:48.168 "num_base_bdevs_operational": 4, 00:11:48.168 "base_bdevs_list": [ 00:11:48.168 { 00:11:48.168 "name": "BaseBdev1", 00:11:48.168 "uuid": "18b28fc3-878e-4831-944e-c33de985871f", 00:11:48.168 "is_configured": true, 00:11:48.168 "data_offset": 0, 00:11:48.168 "data_size": 65536 00:11:48.168 }, 00:11:48.168 { 00:11:48.168 "name": null, 00:11:48.168 "uuid": "b853198f-5c60-4c17-a995-7d2d95b25616", 00:11:48.168 "is_configured": false, 00:11:48.168 "data_offset": 0, 00:11:48.168 "data_size": 65536 00:11:48.168 }, 00:11:48.168 { 00:11:48.168 "name": null, 00:11:48.168 "uuid": "746cde11-832e-4a80-bf0a-725281f646b9", 00:11:48.168 "is_configured": false, 00:11:48.168 "data_offset": 0, 00:11:48.168 "data_size": 65536 00:11:48.168 }, 00:11:48.168 { 00:11:48.168 "name": "BaseBdev4", 00:11:48.168 "uuid": "3b8df76e-8946-416f-aaf8-661404a8f8dd", 00:11:48.168 "is_configured": true, 00:11:48.168 "data_offset": 0, 00:11:48.169 "data_size": 65536 00:11:48.169 } 00:11:48.169 ] 00:11:48.169 }' 00:11:48.169 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.169 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.737 [2024-10-09 03:14:31.828429] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:48.737 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.737 "name": "Existed_Raid", 00:11:48.737 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.737 "strip_size_kb": 64, 00:11:48.737 "state": "configuring", 00:11:48.737 "raid_level": "concat", 00:11:48.737 "superblock": false, 00:11:48.737 "num_base_bdevs": 4, 00:11:48.737 "num_base_bdevs_discovered": 3, 00:11:48.737 "num_base_bdevs_operational": 4, 00:11:48.737 "base_bdevs_list": [ 00:11:48.737 { 00:11:48.737 "name": "BaseBdev1", 00:11:48.737 "uuid": "18b28fc3-878e-4831-944e-c33de985871f", 00:11:48.737 "is_configured": true, 00:11:48.737 "data_offset": 0, 00:11:48.737 "data_size": 65536 00:11:48.737 }, 00:11:48.737 { 00:11:48.737 "name": null, 00:11:48.737 "uuid": "b853198f-5c60-4c17-a995-7d2d95b25616", 00:11:48.737 "is_configured": false, 00:11:48.737 "data_offset": 0, 00:11:48.737 "data_size": 65536 00:11:48.738 }, 00:11:48.738 { 00:11:48.738 "name": "BaseBdev3", 00:11:48.738 "uuid": "746cde11-832e-4a80-bf0a-725281f646b9", 00:11:48.738 
"is_configured": true, 00:11:48.738 "data_offset": 0, 00:11:48.738 "data_size": 65536 00:11:48.738 }, 00:11:48.738 { 00:11:48.738 "name": "BaseBdev4", 00:11:48.738 "uuid": "3b8df76e-8946-416f-aaf8-661404a8f8dd", 00:11:48.738 "is_configured": true, 00:11:48.738 "data_offset": 0, 00:11:48.738 "data_size": 65536 00:11:48.738 } 00:11:48.738 ] 00:11:48.738 }' 00:11:48.738 03:14:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.738 03:14:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.997 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:48.997 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.997 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:48.997 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.997 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.257 [2024-10-09 03:14:32.311628] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.257 "name": "Existed_Raid", 00:11:49.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.257 "strip_size_kb": 64, 00:11:49.257 "state": "configuring", 00:11:49.257 "raid_level": "concat", 00:11:49.257 "superblock": false, 00:11:49.257 "num_base_bdevs": 4, 00:11:49.257 "num_base_bdevs_discovered": 2, 00:11:49.257 "num_base_bdevs_operational": 4, 
00:11:49.257 "base_bdevs_list": [ 00:11:49.257 { 00:11:49.257 "name": null, 00:11:49.257 "uuid": "18b28fc3-878e-4831-944e-c33de985871f", 00:11:49.257 "is_configured": false, 00:11:49.257 "data_offset": 0, 00:11:49.257 "data_size": 65536 00:11:49.257 }, 00:11:49.257 { 00:11:49.257 "name": null, 00:11:49.257 "uuid": "b853198f-5c60-4c17-a995-7d2d95b25616", 00:11:49.257 "is_configured": false, 00:11:49.257 "data_offset": 0, 00:11:49.257 "data_size": 65536 00:11:49.257 }, 00:11:49.257 { 00:11:49.257 "name": "BaseBdev3", 00:11:49.257 "uuid": "746cde11-832e-4a80-bf0a-725281f646b9", 00:11:49.257 "is_configured": true, 00:11:49.257 "data_offset": 0, 00:11:49.257 "data_size": 65536 00:11:49.257 }, 00:11:49.257 { 00:11:49.257 "name": "BaseBdev4", 00:11:49.257 "uuid": "3b8df76e-8946-416f-aaf8-661404a8f8dd", 00:11:49.257 "is_configured": true, 00:11:49.257 "data_offset": 0, 00:11:49.257 "data_size": 65536 00:11:49.257 } 00:11:49.257 ] 00:11:49.257 }' 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.257 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.826 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.826 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.826 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.826 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:49.826 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:49.827 03:14:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.827 [2024-10-09 03:14:32.981327] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:49.827 03:14:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:49.827 03:14:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.827 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:49.827 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:49.827 "name": "Existed_Raid", 00:11:49.827 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.827 "strip_size_kb": 64, 00:11:49.827 "state": "configuring", 00:11:49.827 "raid_level": "concat", 00:11:49.827 "superblock": false, 00:11:49.827 "num_base_bdevs": 4, 00:11:49.827 "num_base_bdevs_discovered": 3, 00:11:49.827 "num_base_bdevs_operational": 4, 00:11:49.827 "base_bdevs_list": [ 00:11:49.827 { 00:11:49.827 "name": null, 00:11:49.827 "uuid": "18b28fc3-878e-4831-944e-c33de985871f", 00:11:49.827 "is_configured": false, 00:11:49.827 "data_offset": 0, 00:11:49.827 "data_size": 65536 00:11:49.827 }, 00:11:49.827 { 00:11:49.827 "name": "BaseBdev2", 00:11:49.827 "uuid": "b853198f-5c60-4c17-a995-7d2d95b25616", 00:11:49.827 "is_configured": true, 00:11:49.827 "data_offset": 0, 00:11:49.827 "data_size": 65536 00:11:49.827 }, 00:11:49.827 { 00:11:49.827 "name": "BaseBdev3", 00:11:49.827 "uuid": "746cde11-832e-4a80-bf0a-725281f646b9", 00:11:49.827 "is_configured": true, 00:11:49.827 "data_offset": 0, 00:11:49.827 "data_size": 65536 00:11:49.827 }, 00:11:49.827 { 00:11:49.827 "name": "BaseBdev4", 00:11:49.827 "uuid": "3b8df76e-8946-416f-aaf8-661404a8f8dd", 00:11:49.827 "is_configured": true, 00:11:49.827 "data_offset": 0, 00:11:49.827 "data_size": 65536 00:11:49.827 } 00:11:49.827 ] 00:11:49.827 }' 00:11:49.827 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:49.827 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 18b28fc3-878e-4831-944e-c33de985871f 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.394 [2024-10-09 03:14:33.561971] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:50.394 [2024-10-09 03:14:33.562137] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:50.394 [2024-10-09 03:14:33.562168] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:50.394 [2024-10-09 03:14:33.562552] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:11:50.394 [2024-10-09 03:14:33.562800] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:50.394 [2024-10-09 03:14:33.562872] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:50.394 [2024-10-09 03:14:33.563235] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:50.394 NewBaseBdev 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:11:50.394 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.395 [ 00:11:50.395 { 
00:11:50.395 "name": "NewBaseBdev", 00:11:50.395 "aliases": [ 00:11:50.395 "18b28fc3-878e-4831-944e-c33de985871f" 00:11:50.395 ], 00:11:50.395 "product_name": "Malloc disk", 00:11:50.395 "block_size": 512, 00:11:50.395 "num_blocks": 65536, 00:11:50.395 "uuid": "18b28fc3-878e-4831-944e-c33de985871f", 00:11:50.395 "assigned_rate_limits": { 00:11:50.395 "rw_ios_per_sec": 0, 00:11:50.395 "rw_mbytes_per_sec": 0, 00:11:50.395 "r_mbytes_per_sec": 0, 00:11:50.395 "w_mbytes_per_sec": 0 00:11:50.395 }, 00:11:50.395 "claimed": true, 00:11:50.395 "claim_type": "exclusive_write", 00:11:50.395 "zoned": false, 00:11:50.395 "supported_io_types": { 00:11:50.395 "read": true, 00:11:50.395 "write": true, 00:11:50.395 "unmap": true, 00:11:50.395 "flush": true, 00:11:50.395 "reset": true, 00:11:50.395 "nvme_admin": false, 00:11:50.395 "nvme_io": false, 00:11:50.395 "nvme_io_md": false, 00:11:50.395 "write_zeroes": true, 00:11:50.395 "zcopy": true, 00:11:50.395 "get_zone_info": false, 00:11:50.395 "zone_management": false, 00:11:50.395 "zone_append": false, 00:11:50.395 "compare": false, 00:11:50.395 "compare_and_write": false, 00:11:50.395 "abort": true, 00:11:50.395 "seek_hole": false, 00:11:50.395 "seek_data": false, 00:11:50.395 "copy": true, 00:11:50.395 "nvme_iov_md": false 00:11:50.395 }, 00:11:50.395 "memory_domains": [ 00:11:50.395 { 00:11:50.395 "dma_device_id": "system", 00:11:50.395 "dma_device_type": 1 00:11:50.395 }, 00:11:50.395 { 00:11:50.395 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.395 "dma_device_type": 2 00:11:50.395 } 00:11:50.395 ], 00:11:50.395 "driver_specific": {} 00:11:50.395 } 00:11:50.395 ] 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:11:50.395 
03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:50.395 "name": "Existed_Raid", 00:11:50.395 "uuid": "f4def10c-b15b-4151-ae36-9cc8836df60f", 00:11:50.395 "strip_size_kb": 64, 00:11:50.395 "state": "online", 00:11:50.395 "raid_level": "concat", 00:11:50.395 "superblock": false, 00:11:50.395 "num_base_bdevs": 4, 00:11:50.395 "num_base_bdevs_discovered": 4, 00:11:50.395 
"num_base_bdevs_operational": 4, 00:11:50.395 "base_bdevs_list": [ 00:11:50.395 { 00:11:50.395 "name": "NewBaseBdev", 00:11:50.395 "uuid": "18b28fc3-878e-4831-944e-c33de985871f", 00:11:50.395 "is_configured": true, 00:11:50.395 "data_offset": 0, 00:11:50.395 "data_size": 65536 00:11:50.395 }, 00:11:50.395 { 00:11:50.395 "name": "BaseBdev2", 00:11:50.395 "uuid": "b853198f-5c60-4c17-a995-7d2d95b25616", 00:11:50.395 "is_configured": true, 00:11:50.395 "data_offset": 0, 00:11:50.395 "data_size": 65536 00:11:50.395 }, 00:11:50.395 { 00:11:50.395 "name": "BaseBdev3", 00:11:50.395 "uuid": "746cde11-832e-4a80-bf0a-725281f646b9", 00:11:50.395 "is_configured": true, 00:11:50.395 "data_offset": 0, 00:11:50.395 "data_size": 65536 00:11:50.395 }, 00:11:50.395 { 00:11:50.395 "name": "BaseBdev4", 00:11:50.395 "uuid": "3b8df76e-8946-416f-aaf8-661404a8f8dd", 00:11:50.395 "is_configured": true, 00:11:50.395 "data_offset": 0, 00:11:50.395 "data_size": 65536 00:11:50.395 } 00:11:50.395 ] 00:11:50.395 }' 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:50.395 03:14:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.964 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:50.964 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:50.964 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:50.964 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:50.964 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:50.964 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:50.964 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:11:50.964 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.964 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.964 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:50.964 [2024-10-09 03:14:34.069635] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:50.964 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.964 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:50.964 "name": "Existed_Raid", 00:11:50.964 "aliases": [ 00:11:50.964 "f4def10c-b15b-4151-ae36-9cc8836df60f" 00:11:50.964 ], 00:11:50.964 "product_name": "Raid Volume", 00:11:50.964 "block_size": 512, 00:11:50.964 "num_blocks": 262144, 00:11:50.964 "uuid": "f4def10c-b15b-4151-ae36-9cc8836df60f", 00:11:50.964 "assigned_rate_limits": { 00:11:50.964 "rw_ios_per_sec": 0, 00:11:50.964 "rw_mbytes_per_sec": 0, 00:11:50.964 "r_mbytes_per_sec": 0, 00:11:50.964 "w_mbytes_per_sec": 0 00:11:50.965 }, 00:11:50.965 "claimed": false, 00:11:50.965 "zoned": false, 00:11:50.965 "supported_io_types": { 00:11:50.965 "read": true, 00:11:50.965 "write": true, 00:11:50.965 "unmap": true, 00:11:50.965 "flush": true, 00:11:50.965 "reset": true, 00:11:50.965 "nvme_admin": false, 00:11:50.965 "nvme_io": false, 00:11:50.965 "nvme_io_md": false, 00:11:50.965 "write_zeroes": true, 00:11:50.965 "zcopy": false, 00:11:50.965 "get_zone_info": false, 00:11:50.965 "zone_management": false, 00:11:50.965 "zone_append": false, 00:11:50.965 "compare": false, 00:11:50.965 "compare_and_write": false, 00:11:50.965 "abort": false, 00:11:50.965 "seek_hole": false, 00:11:50.965 "seek_data": false, 00:11:50.965 "copy": false, 00:11:50.965 "nvme_iov_md": false 00:11:50.965 }, 00:11:50.965 "memory_domains": [ 00:11:50.965 { 00:11:50.965 "dma_device_id": "system", 
00:11:50.965 "dma_device_type": 1 00:11:50.965 }, 00:11:50.965 { 00:11:50.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.965 "dma_device_type": 2 00:11:50.965 }, 00:11:50.965 { 00:11:50.965 "dma_device_id": "system", 00:11:50.965 "dma_device_type": 1 00:11:50.965 }, 00:11:50.965 { 00:11:50.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.965 "dma_device_type": 2 00:11:50.965 }, 00:11:50.965 { 00:11:50.965 "dma_device_id": "system", 00:11:50.965 "dma_device_type": 1 00:11:50.965 }, 00:11:50.965 { 00:11:50.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.965 "dma_device_type": 2 00:11:50.965 }, 00:11:50.965 { 00:11:50.965 "dma_device_id": "system", 00:11:50.965 "dma_device_type": 1 00:11:50.965 }, 00:11:50.965 { 00:11:50.965 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:50.965 "dma_device_type": 2 00:11:50.965 } 00:11:50.965 ], 00:11:50.965 "driver_specific": { 00:11:50.965 "raid": { 00:11:50.965 "uuid": "f4def10c-b15b-4151-ae36-9cc8836df60f", 00:11:50.965 "strip_size_kb": 64, 00:11:50.965 "state": "online", 00:11:50.965 "raid_level": "concat", 00:11:50.965 "superblock": false, 00:11:50.965 "num_base_bdevs": 4, 00:11:50.965 "num_base_bdevs_discovered": 4, 00:11:50.965 "num_base_bdevs_operational": 4, 00:11:50.965 "base_bdevs_list": [ 00:11:50.965 { 00:11:50.965 "name": "NewBaseBdev", 00:11:50.965 "uuid": "18b28fc3-878e-4831-944e-c33de985871f", 00:11:50.965 "is_configured": true, 00:11:50.965 "data_offset": 0, 00:11:50.965 "data_size": 65536 00:11:50.965 }, 00:11:50.965 { 00:11:50.965 "name": "BaseBdev2", 00:11:50.965 "uuid": "b853198f-5c60-4c17-a995-7d2d95b25616", 00:11:50.965 "is_configured": true, 00:11:50.965 "data_offset": 0, 00:11:50.965 "data_size": 65536 00:11:50.965 }, 00:11:50.965 { 00:11:50.965 "name": "BaseBdev3", 00:11:50.965 "uuid": "746cde11-832e-4a80-bf0a-725281f646b9", 00:11:50.965 "is_configured": true, 00:11:50.965 "data_offset": 0, 00:11:50.965 "data_size": 65536 00:11:50.965 }, 00:11:50.965 { 00:11:50.965 "name": "BaseBdev4", 
00:11:50.965 "uuid": "3b8df76e-8946-416f-aaf8-661404a8f8dd", 00:11:50.965 "is_configured": true, 00:11:50.965 "data_offset": 0, 00:11:50.965 "data_size": 65536 00:11:50.965 } 00:11:50.965 ] 00:11:50.965 } 00:11:50.965 } 00:11:50.965 }' 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:50.965 BaseBdev2 00:11:50.965 BaseBdev3 00:11:50.965 BaseBdev4' 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:50.965 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:51.225 03:14:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.225 [2024-10-09 03:14:34.381042] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:51.225 [2024-10-09 03:14:34.381207] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:51.225 [2024-10-09 03:14:34.381379] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:51.225 [2024-10-09 03:14:34.381503] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:51.225 [2024-10-09 03:14:34.381560] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71431 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 
-- # '[' -z 71431 ']' 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 71431 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 71431 00:11:51.225 killing process with pid 71431 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 71431' 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 71431 00:11:51.225 [2024-10-09 03:14:34.431770] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:51.225 03:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 71431 00:11:51.795 [2024-10-09 03:14:34.952312] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:53.712 ************************************ 00:11:53.712 END TEST raid_state_function_test 00:11:53.712 ************************************ 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:53.712 00:11:53.712 real 0m12.830s 00:11:53.712 user 0m19.667s 00:11:53.712 sys 0m2.321s 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.712 03:14:36 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 
00:11:53.712 03:14:36 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:11:53.712 03:14:36 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.712 03:14:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:53.712 ************************************ 00:11:53.712 START TEST raid_state_function_test_sb 00:11:53.712 ************************************ 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test concat 4 true 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:53.712 03:14:36 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=72119 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 72119' 00:11:53.712 Process raid pid: 72119 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 72119 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 72119 ']' 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:53.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:53.712 03:14:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:53.712 [2024-10-09 03:14:36.790887] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:11:53.712 [2024-10-09 03:14:36.791058] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:53.712 [2024-10-09 03:14:36.965961] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:53.973 [2024-10-09 03:14:37.228948] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:11:54.232 [2024-10-09 03:14:37.474957] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.232 [2024-10-09 03:14:37.474993] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.492 [2024-10-09 03:14:37.646681] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:54.492 [2024-10-09 03:14:37.646816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:54.492 [2024-10-09 03:14:37.646867] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:54.492 [2024-10-09 03:14:37.646894] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:54.492 [2024-10-09 03:14:37.646912] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:11:54.492 [2024-10-09 03:14:37.646933] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:54.492 [2024-10-09 03:14:37.646950] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:54.492 [2024-10-09 03:14:37.646971] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:54.492 03:14:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:54.492 "name": "Existed_Raid", 00:11:54.492 "uuid": "12904eaa-d19a-4c15-aa4f-dd1713a93a8d", 00:11:54.492 "strip_size_kb": 64, 00:11:54.492 "state": "configuring", 00:11:54.492 "raid_level": "concat", 00:11:54.492 "superblock": true, 00:11:54.492 "num_base_bdevs": 4, 00:11:54.492 "num_base_bdevs_discovered": 0, 00:11:54.492 "num_base_bdevs_operational": 4, 00:11:54.492 "base_bdevs_list": [ 00:11:54.492 { 00:11:54.492 "name": "BaseBdev1", 00:11:54.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.492 "is_configured": false, 00:11:54.492 "data_offset": 0, 00:11:54.492 "data_size": 0 00:11:54.492 }, 00:11:54.492 { 00:11:54.492 "name": "BaseBdev2", 00:11:54.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.492 "is_configured": false, 00:11:54.492 "data_offset": 0, 00:11:54.492 "data_size": 0 00:11:54.492 }, 00:11:54.492 { 00:11:54.492 "name": "BaseBdev3", 00:11:54.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.492 "is_configured": false, 00:11:54.492 "data_offset": 0, 00:11:54.492 "data_size": 0 00:11:54.492 }, 00:11:54.492 { 00:11:54.492 "name": "BaseBdev4", 00:11:54.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:54.492 "is_configured": false, 00:11:54.492 "data_offset": 0, 00:11:54.492 "data_size": 0 00:11:54.492 } 00:11:54.492 ] 00:11:54.492 }' 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:54.492 03:14:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.062 03:14:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.062 [2024-10-09 03:14:38.065981] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.062 [2024-10-09 03:14:38.066126] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.062 [2024-10-09 03:14:38.077963] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:55.062 [2024-10-09 03:14:38.078053] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:55.062 [2024-10-09 03:14:38.078080] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.062 [2024-10-09 03:14:38.078103] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.062 [2024-10-09 03:14:38.078122] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:55.062 [2024-10-09 03:14:38.078144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.062 [2024-10-09 03:14:38.078162] bdev.c:8281:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:11:55.062 [2024-10-09 03:14:38.078183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.062 BaseBdev1 00:11:55.062 [2024-10-09 03:14:38.147023] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.062 [ 00:11:55.062 { 00:11:55.062 "name": "BaseBdev1", 00:11:55.062 "aliases": [ 00:11:55.062 "e3bc372b-58c2-43c1-89d1-41b822f48a8d" 00:11:55.062 ], 00:11:55.062 "product_name": "Malloc disk", 00:11:55.062 "block_size": 512, 00:11:55.062 "num_blocks": 65536, 00:11:55.062 "uuid": "e3bc372b-58c2-43c1-89d1-41b822f48a8d", 00:11:55.062 "assigned_rate_limits": { 00:11:55.062 "rw_ios_per_sec": 0, 00:11:55.062 "rw_mbytes_per_sec": 0, 00:11:55.062 "r_mbytes_per_sec": 0, 00:11:55.062 "w_mbytes_per_sec": 0 00:11:55.062 }, 00:11:55.062 "claimed": true, 00:11:55.062 "claim_type": "exclusive_write", 00:11:55.062 "zoned": false, 00:11:55.062 "supported_io_types": { 00:11:55.062 "read": true, 00:11:55.062 "write": true, 00:11:55.062 "unmap": true, 00:11:55.062 "flush": true, 00:11:55.062 "reset": true, 00:11:55.062 "nvme_admin": false, 00:11:55.062 "nvme_io": false, 00:11:55.062 "nvme_io_md": false, 00:11:55.062 "write_zeroes": true, 00:11:55.062 "zcopy": true, 00:11:55.062 "get_zone_info": false, 00:11:55.062 "zone_management": false, 00:11:55.062 "zone_append": false, 00:11:55.062 "compare": false, 00:11:55.062 "compare_and_write": false, 00:11:55.062 "abort": true, 00:11:55.062 "seek_hole": false, 00:11:55.062 "seek_data": false, 00:11:55.062 "copy": true, 00:11:55.062 "nvme_iov_md": false 00:11:55.062 }, 00:11:55.062 "memory_domains": [ 00:11:55.062 { 00:11:55.062 "dma_device_id": "system", 00:11:55.062 "dma_device_type": 1 00:11:55.062 }, 00:11:55.062 { 00:11:55.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.062 "dma_device_type": 2 00:11:55.062 } 
00:11:55.062 ], 00:11:55.062 "driver_specific": {} 00:11:55.062 } 00:11:55.062 ] 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.062 03:14:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.062 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:55.062 "name": "Existed_Raid", 00:11:55.062 "uuid": "19f0cc2d-2693-485a-a4e0-47227164c72d", 00:11:55.062 "strip_size_kb": 64, 00:11:55.062 "state": "configuring", 00:11:55.062 "raid_level": "concat", 00:11:55.062 "superblock": true, 00:11:55.062 "num_base_bdevs": 4, 00:11:55.062 "num_base_bdevs_discovered": 1, 00:11:55.062 "num_base_bdevs_operational": 4, 00:11:55.062 "base_bdevs_list": [ 00:11:55.062 { 00:11:55.062 "name": "BaseBdev1", 00:11:55.063 "uuid": "e3bc372b-58c2-43c1-89d1-41b822f48a8d", 00:11:55.063 "is_configured": true, 00:11:55.063 "data_offset": 2048, 00:11:55.063 "data_size": 63488 00:11:55.063 }, 00:11:55.063 { 00:11:55.063 "name": "BaseBdev2", 00:11:55.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.063 "is_configured": false, 00:11:55.063 "data_offset": 0, 00:11:55.063 "data_size": 0 00:11:55.063 }, 00:11:55.063 { 00:11:55.063 "name": "BaseBdev3", 00:11:55.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.063 "is_configured": false, 00:11:55.063 "data_offset": 0, 00:11:55.063 "data_size": 0 00:11:55.063 }, 00:11:55.063 { 00:11:55.063 "name": "BaseBdev4", 00:11:55.063 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.063 "is_configured": false, 00:11:55.063 "data_offset": 0, 00:11:55.063 "data_size": 0 00:11:55.063 } 00:11:55.063 ] 00:11:55.063 }' 00:11:55.063 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.063 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.632 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:55.632 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.632 03:14:38 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.632 [2024-10-09 03:14:38.650256] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.632 [2024-10-09 03:14:38.650428] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:55.632 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.632 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:55.632 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.632 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.632 [2024-10-09 03:14:38.662268] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:55.633 [2024-10-09 03:14:38.664409] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:55.633 [2024-10-09 03:14:38.664491] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:55.633 [2024-10-09 03:14:38.664520] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:55.633 [2024-10-09 03:14:38.664544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:55.633 [2024-10-09 03:14:38.664562] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:55.633 [2024-10-09 03:14:38.664582] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:11:55.633 "name": "Existed_Raid", 00:11:55.633 "uuid": "94e0fdbb-3605-41d1-98d2-789719efff4b", 00:11:55.633 "strip_size_kb": 64, 00:11:55.633 "state": "configuring", 00:11:55.633 "raid_level": "concat", 00:11:55.633 "superblock": true, 00:11:55.633 "num_base_bdevs": 4, 00:11:55.633 "num_base_bdevs_discovered": 1, 00:11:55.633 "num_base_bdevs_operational": 4, 00:11:55.633 "base_bdevs_list": [ 00:11:55.633 { 00:11:55.633 "name": "BaseBdev1", 00:11:55.633 "uuid": "e3bc372b-58c2-43c1-89d1-41b822f48a8d", 00:11:55.633 "is_configured": true, 00:11:55.633 "data_offset": 2048, 00:11:55.633 "data_size": 63488 00:11:55.633 }, 00:11:55.633 { 00:11:55.633 "name": "BaseBdev2", 00:11:55.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.633 "is_configured": false, 00:11:55.633 "data_offset": 0, 00:11:55.633 "data_size": 0 00:11:55.633 }, 00:11:55.633 { 00:11:55.633 "name": "BaseBdev3", 00:11:55.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.633 "is_configured": false, 00:11:55.633 "data_offset": 0, 00:11:55.633 "data_size": 0 00:11:55.633 }, 00:11:55.633 { 00:11:55.633 "name": "BaseBdev4", 00:11:55.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:55.633 "is_configured": false, 00:11:55.633 "data_offset": 0, 00:11:55.633 "data_size": 0 00:11:55.633 } 00:11:55.633 ] 00:11:55.633 }' 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:55.633 03:14:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.893 [2024-10-09 03:14:39.148926] bdev_raid.c:3322:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:11:55.893 BaseBdev2 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.893 [ 00:11:55.893 { 00:11:55.893 "name": "BaseBdev2", 00:11:55.893 "aliases": [ 00:11:55.893 "737e93ff-dfb3-4192-a056-a812e989af70" 00:11:55.893 ], 00:11:55.893 "product_name": "Malloc disk", 00:11:55.893 "block_size": 512, 00:11:55.893 "num_blocks": 65536, 00:11:55.893 "uuid": "737e93ff-dfb3-4192-a056-a812e989af70", 
00:11:55.893 "assigned_rate_limits": { 00:11:55.893 "rw_ios_per_sec": 0, 00:11:55.893 "rw_mbytes_per_sec": 0, 00:11:55.893 "r_mbytes_per_sec": 0, 00:11:55.893 "w_mbytes_per_sec": 0 00:11:55.893 }, 00:11:55.893 "claimed": true, 00:11:55.893 "claim_type": "exclusive_write", 00:11:55.893 "zoned": false, 00:11:55.893 "supported_io_types": { 00:11:55.893 "read": true, 00:11:55.893 "write": true, 00:11:55.893 "unmap": true, 00:11:55.893 "flush": true, 00:11:55.893 "reset": true, 00:11:55.893 "nvme_admin": false, 00:11:55.893 "nvme_io": false, 00:11:55.893 "nvme_io_md": false, 00:11:55.893 "write_zeroes": true, 00:11:55.893 "zcopy": true, 00:11:55.893 "get_zone_info": false, 00:11:55.893 "zone_management": false, 00:11:55.893 "zone_append": false, 00:11:55.893 "compare": false, 00:11:55.893 "compare_and_write": false, 00:11:55.893 "abort": true, 00:11:55.893 "seek_hole": false, 00:11:55.893 "seek_data": false, 00:11:55.893 "copy": true, 00:11:55.893 "nvme_iov_md": false 00:11:55.893 }, 00:11:55.893 "memory_domains": [ 00:11:55.893 { 00:11:55.893 "dma_device_id": "system", 00:11:55.893 "dma_device_type": 1 00:11:55.893 }, 00:11:55.893 { 00:11:55.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:55.893 "dma_device_type": 2 00:11:55.893 } 00:11:55.893 ], 00:11:55.893 "driver_specific": {} 00:11:55.893 } 00:11:55.893 ] 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.893 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.153 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.153 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.153 "name": "Existed_Raid", 00:11:56.153 "uuid": "94e0fdbb-3605-41d1-98d2-789719efff4b", 00:11:56.153 "strip_size_kb": 64, 00:11:56.153 "state": "configuring", 00:11:56.153 "raid_level": "concat", 00:11:56.153 "superblock": true, 00:11:56.153 "num_base_bdevs": 4, 00:11:56.153 "num_base_bdevs_discovered": 2, 00:11:56.153 
"num_base_bdevs_operational": 4, 00:11:56.153 "base_bdevs_list": [ 00:11:56.153 { 00:11:56.153 "name": "BaseBdev1", 00:11:56.153 "uuid": "e3bc372b-58c2-43c1-89d1-41b822f48a8d", 00:11:56.153 "is_configured": true, 00:11:56.153 "data_offset": 2048, 00:11:56.153 "data_size": 63488 00:11:56.153 }, 00:11:56.153 { 00:11:56.153 "name": "BaseBdev2", 00:11:56.153 "uuid": "737e93ff-dfb3-4192-a056-a812e989af70", 00:11:56.153 "is_configured": true, 00:11:56.153 "data_offset": 2048, 00:11:56.153 "data_size": 63488 00:11:56.153 }, 00:11:56.153 { 00:11:56.153 "name": "BaseBdev3", 00:11:56.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.153 "is_configured": false, 00:11:56.153 "data_offset": 0, 00:11:56.153 "data_size": 0 00:11:56.153 }, 00:11:56.153 { 00:11:56.153 "name": "BaseBdev4", 00:11:56.153 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.153 "is_configured": false, 00:11:56.153 "data_offset": 0, 00:11:56.153 "data_size": 0 00:11:56.153 } 00:11:56.153 ] 00:11:56.153 }' 00:11:56.153 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.153 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.413 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:56.413 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.413 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.413 [2024-10-09 03:14:39.706765] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:56.413 BaseBdev3 00:11:56.413 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.413 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:56.413 03:14:39 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:56.413 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:56.413 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:56.413 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:56.413 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:56.413 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:56.413 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.413 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.673 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.673 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:56.673 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.673 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.673 [ 00:11:56.673 { 00:11:56.673 "name": "BaseBdev3", 00:11:56.673 "aliases": [ 00:11:56.673 "194d520b-4627-49f3-bc96-eca10d5fbbd8" 00:11:56.673 ], 00:11:56.673 "product_name": "Malloc disk", 00:11:56.673 "block_size": 512, 00:11:56.673 "num_blocks": 65536, 00:11:56.673 "uuid": "194d520b-4627-49f3-bc96-eca10d5fbbd8", 00:11:56.673 "assigned_rate_limits": { 00:11:56.673 "rw_ios_per_sec": 0, 00:11:56.673 "rw_mbytes_per_sec": 0, 00:11:56.673 "r_mbytes_per_sec": 0, 00:11:56.673 "w_mbytes_per_sec": 0 00:11:56.673 }, 00:11:56.673 "claimed": true, 00:11:56.673 "claim_type": "exclusive_write", 00:11:56.673 "zoned": false, 00:11:56.673 "supported_io_types": { 
00:11:56.673 "read": true, 00:11:56.673 "write": true, 00:11:56.673 "unmap": true, 00:11:56.673 "flush": true, 00:11:56.673 "reset": true, 00:11:56.673 "nvme_admin": false, 00:11:56.673 "nvme_io": false, 00:11:56.673 "nvme_io_md": false, 00:11:56.673 "write_zeroes": true, 00:11:56.673 "zcopy": true, 00:11:56.673 "get_zone_info": false, 00:11:56.673 "zone_management": false, 00:11:56.673 "zone_append": false, 00:11:56.673 "compare": false, 00:11:56.673 "compare_and_write": false, 00:11:56.673 "abort": true, 00:11:56.673 "seek_hole": false, 00:11:56.673 "seek_data": false, 00:11:56.673 "copy": true, 00:11:56.673 "nvme_iov_md": false 00:11:56.673 }, 00:11:56.673 "memory_domains": [ 00:11:56.673 { 00:11:56.673 "dma_device_id": "system", 00:11:56.673 "dma_device_type": 1 00:11:56.673 }, 00:11:56.673 { 00:11:56.673 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.673 "dma_device_type": 2 00:11:56.673 } 00:11:56.673 ], 00:11:56.673 "driver_specific": {} 00:11:56.673 } 00:11:56.673 ] 00:11:56.673 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.673 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:56.673 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:56.673 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:56.673 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.674 "name": "Existed_Raid", 00:11:56.674 "uuid": "94e0fdbb-3605-41d1-98d2-789719efff4b", 00:11:56.674 "strip_size_kb": 64, 00:11:56.674 "state": "configuring", 00:11:56.674 "raid_level": "concat", 00:11:56.674 "superblock": true, 00:11:56.674 "num_base_bdevs": 4, 00:11:56.674 "num_base_bdevs_discovered": 3, 00:11:56.674 "num_base_bdevs_operational": 4, 00:11:56.674 "base_bdevs_list": [ 00:11:56.674 { 00:11:56.674 "name": "BaseBdev1", 00:11:56.674 "uuid": "e3bc372b-58c2-43c1-89d1-41b822f48a8d", 00:11:56.674 "is_configured": true, 00:11:56.674 "data_offset": 2048, 00:11:56.674 "data_size": 63488 00:11:56.674 }, 00:11:56.674 { 00:11:56.674 "name": "BaseBdev2", 00:11:56.674 
"uuid": "737e93ff-dfb3-4192-a056-a812e989af70", 00:11:56.674 "is_configured": true, 00:11:56.674 "data_offset": 2048, 00:11:56.674 "data_size": 63488 00:11:56.674 }, 00:11:56.674 { 00:11:56.674 "name": "BaseBdev3", 00:11:56.674 "uuid": "194d520b-4627-49f3-bc96-eca10d5fbbd8", 00:11:56.674 "is_configured": true, 00:11:56.674 "data_offset": 2048, 00:11:56.674 "data_size": 63488 00:11:56.674 }, 00:11:56.674 { 00:11:56.674 "name": "BaseBdev4", 00:11:56.674 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.674 "is_configured": false, 00:11:56.674 "data_offset": 0, 00:11:56.674 "data_size": 0 00:11:56.674 } 00:11:56.674 ] 00:11:56.674 }' 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.674 03:14:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:56.934 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:56.934 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:56.934 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.194 [2024-10-09 03:14:40.258582] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:57.194 [2024-10-09 03:14:40.259015] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:57.194 [2024-10-09 03:14:40.259071] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:11:57.194 [2024-10-09 03:14:40.259406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:57.194 BaseBdev4 00:11:57.194 [2024-10-09 03:14:40.259597] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:57.194 [2024-10-09 03:14:40.259614] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:11:57.194 [2024-10-09 03:14:40.259763] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.194 [ 00:11:57.194 { 00:11:57.194 "name": "BaseBdev4", 00:11:57.194 "aliases": [ 00:11:57.194 "a926e790-ebbe-44ab-9269-df71a1f75b76" 00:11:57.194 ], 00:11:57.194 "product_name": "Malloc disk", 00:11:57.194 "block_size": 512, 00:11:57.194 
"num_blocks": 65536, 00:11:57.194 "uuid": "a926e790-ebbe-44ab-9269-df71a1f75b76", 00:11:57.194 "assigned_rate_limits": { 00:11:57.194 "rw_ios_per_sec": 0, 00:11:57.194 "rw_mbytes_per_sec": 0, 00:11:57.194 "r_mbytes_per_sec": 0, 00:11:57.194 "w_mbytes_per_sec": 0 00:11:57.194 }, 00:11:57.194 "claimed": true, 00:11:57.194 "claim_type": "exclusive_write", 00:11:57.194 "zoned": false, 00:11:57.194 "supported_io_types": { 00:11:57.194 "read": true, 00:11:57.194 "write": true, 00:11:57.194 "unmap": true, 00:11:57.194 "flush": true, 00:11:57.194 "reset": true, 00:11:57.194 "nvme_admin": false, 00:11:57.194 "nvme_io": false, 00:11:57.194 "nvme_io_md": false, 00:11:57.194 "write_zeroes": true, 00:11:57.194 "zcopy": true, 00:11:57.194 "get_zone_info": false, 00:11:57.194 "zone_management": false, 00:11:57.194 "zone_append": false, 00:11:57.194 "compare": false, 00:11:57.194 "compare_and_write": false, 00:11:57.194 "abort": true, 00:11:57.194 "seek_hole": false, 00:11:57.194 "seek_data": false, 00:11:57.194 "copy": true, 00:11:57.194 "nvme_iov_md": false 00:11:57.194 }, 00:11:57.194 "memory_domains": [ 00:11:57.194 { 00:11:57.194 "dma_device_id": "system", 00:11:57.194 "dma_device_type": 1 00:11:57.194 }, 00:11:57.194 { 00:11:57.194 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.194 "dma_device_type": 2 00:11:57.194 } 00:11:57.194 ], 00:11:57.194 "driver_specific": {} 00:11:57.194 } 00:11:57.194 ] 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.194 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.195 "name": "Existed_Raid", 00:11:57.195 "uuid": "94e0fdbb-3605-41d1-98d2-789719efff4b", 00:11:57.195 "strip_size_kb": 64, 00:11:57.195 "state": "online", 00:11:57.195 "raid_level": "concat", 00:11:57.195 "superblock": true, 00:11:57.195 "num_base_bdevs": 4, 
00:11:57.195 "num_base_bdevs_discovered": 4, 00:11:57.195 "num_base_bdevs_operational": 4, 00:11:57.195 "base_bdevs_list": [ 00:11:57.195 { 00:11:57.195 "name": "BaseBdev1", 00:11:57.195 "uuid": "e3bc372b-58c2-43c1-89d1-41b822f48a8d", 00:11:57.195 "is_configured": true, 00:11:57.195 "data_offset": 2048, 00:11:57.195 "data_size": 63488 00:11:57.195 }, 00:11:57.195 { 00:11:57.195 "name": "BaseBdev2", 00:11:57.195 "uuid": "737e93ff-dfb3-4192-a056-a812e989af70", 00:11:57.195 "is_configured": true, 00:11:57.195 "data_offset": 2048, 00:11:57.195 "data_size": 63488 00:11:57.195 }, 00:11:57.195 { 00:11:57.195 "name": "BaseBdev3", 00:11:57.195 "uuid": "194d520b-4627-49f3-bc96-eca10d5fbbd8", 00:11:57.195 "is_configured": true, 00:11:57.195 "data_offset": 2048, 00:11:57.195 "data_size": 63488 00:11:57.195 }, 00:11:57.195 { 00:11:57.195 "name": "BaseBdev4", 00:11:57.195 "uuid": "a926e790-ebbe-44ab-9269-df71a1f75b76", 00:11:57.195 "is_configured": true, 00:11:57.195 "data_offset": 2048, 00:11:57.195 "data_size": 63488 00:11:57.195 } 00:11:57.195 ] 00:11:57.195 }' 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.195 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.455 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:57.455 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:57.455 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:57.455 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:57.455 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:57.455 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:57.455 
03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:57.455 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.455 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.455 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:57.455 [2024-10-09 03:14:40.746191] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:57.715 "name": "Existed_Raid", 00:11:57.715 "aliases": [ 00:11:57.715 "94e0fdbb-3605-41d1-98d2-789719efff4b" 00:11:57.715 ], 00:11:57.715 "product_name": "Raid Volume", 00:11:57.715 "block_size": 512, 00:11:57.715 "num_blocks": 253952, 00:11:57.715 "uuid": "94e0fdbb-3605-41d1-98d2-789719efff4b", 00:11:57.715 "assigned_rate_limits": { 00:11:57.715 "rw_ios_per_sec": 0, 00:11:57.715 "rw_mbytes_per_sec": 0, 00:11:57.715 "r_mbytes_per_sec": 0, 00:11:57.715 "w_mbytes_per_sec": 0 00:11:57.715 }, 00:11:57.715 "claimed": false, 00:11:57.715 "zoned": false, 00:11:57.715 "supported_io_types": { 00:11:57.715 "read": true, 00:11:57.715 "write": true, 00:11:57.715 "unmap": true, 00:11:57.715 "flush": true, 00:11:57.715 "reset": true, 00:11:57.715 "nvme_admin": false, 00:11:57.715 "nvme_io": false, 00:11:57.715 "nvme_io_md": false, 00:11:57.715 "write_zeroes": true, 00:11:57.715 "zcopy": false, 00:11:57.715 "get_zone_info": false, 00:11:57.715 "zone_management": false, 00:11:57.715 "zone_append": false, 00:11:57.715 "compare": false, 00:11:57.715 "compare_and_write": false, 00:11:57.715 "abort": false, 00:11:57.715 "seek_hole": false, 00:11:57.715 "seek_data": false, 00:11:57.715 "copy": false, 00:11:57.715 
"nvme_iov_md": false 00:11:57.715 }, 00:11:57.715 "memory_domains": [ 00:11:57.715 { 00:11:57.715 "dma_device_id": "system", 00:11:57.715 "dma_device_type": 1 00:11:57.715 }, 00:11:57.715 { 00:11:57.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.715 "dma_device_type": 2 00:11:57.715 }, 00:11:57.715 { 00:11:57.715 "dma_device_id": "system", 00:11:57.715 "dma_device_type": 1 00:11:57.715 }, 00:11:57.715 { 00:11:57.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.715 "dma_device_type": 2 00:11:57.715 }, 00:11:57.715 { 00:11:57.715 "dma_device_id": "system", 00:11:57.715 "dma_device_type": 1 00:11:57.715 }, 00:11:57.715 { 00:11:57.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.715 "dma_device_type": 2 00:11:57.715 }, 00:11:57.715 { 00:11:57.715 "dma_device_id": "system", 00:11:57.715 "dma_device_type": 1 00:11:57.715 }, 00:11:57.715 { 00:11:57.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.715 "dma_device_type": 2 00:11:57.715 } 00:11:57.715 ], 00:11:57.715 "driver_specific": { 00:11:57.715 "raid": { 00:11:57.715 "uuid": "94e0fdbb-3605-41d1-98d2-789719efff4b", 00:11:57.715 "strip_size_kb": 64, 00:11:57.715 "state": "online", 00:11:57.715 "raid_level": "concat", 00:11:57.715 "superblock": true, 00:11:57.715 "num_base_bdevs": 4, 00:11:57.715 "num_base_bdevs_discovered": 4, 00:11:57.715 "num_base_bdevs_operational": 4, 00:11:57.715 "base_bdevs_list": [ 00:11:57.715 { 00:11:57.715 "name": "BaseBdev1", 00:11:57.715 "uuid": "e3bc372b-58c2-43c1-89d1-41b822f48a8d", 00:11:57.715 "is_configured": true, 00:11:57.715 "data_offset": 2048, 00:11:57.715 "data_size": 63488 00:11:57.715 }, 00:11:57.715 { 00:11:57.715 "name": "BaseBdev2", 00:11:57.715 "uuid": "737e93ff-dfb3-4192-a056-a812e989af70", 00:11:57.715 "is_configured": true, 00:11:57.715 "data_offset": 2048, 00:11:57.715 "data_size": 63488 00:11:57.715 }, 00:11:57.715 { 00:11:57.715 "name": "BaseBdev3", 00:11:57.715 "uuid": "194d520b-4627-49f3-bc96-eca10d5fbbd8", 00:11:57.715 "is_configured": true, 
00:11:57.715 "data_offset": 2048, 00:11:57.715 "data_size": 63488 00:11:57.715 }, 00:11:57.715 { 00:11:57.715 "name": "BaseBdev4", 00:11:57.715 "uuid": "a926e790-ebbe-44ab-9269-df71a1f75b76", 00:11:57.715 "is_configured": true, 00:11:57.715 "data_offset": 2048, 00:11:57.715 "data_size": 63488 00:11:57.715 } 00:11:57.715 ] 00:11:57.715 } 00:11:57.715 } 00:11:57.715 }' 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:57.715 BaseBdev2 00:11:57.715 BaseBdev3 00:11:57.715 BaseBdev4' 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.715 03:14:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.715 03:14:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.975 [2024-10-09 03:14:41.077296] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:57.975 [2024-10-09 03:14:41.077421] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:57.975 [2024-10-09 03:14:41.077504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:11:57.975 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.975 "name": "Existed_Raid", 00:11:57.975 "uuid": "94e0fdbb-3605-41d1-98d2-789719efff4b", 00:11:57.975 "strip_size_kb": 64, 00:11:57.975 "state": "offline", 00:11:57.975 "raid_level": "concat", 00:11:57.975 "superblock": true, 00:11:57.975 "num_base_bdevs": 4, 00:11:57.975 "num_base_bdevs_discovered": 3, 00:11:57.975 "num_base_bdevs_operational": 3, 00:11:57.975 "base_bdevs_list": [ 00:11:57.975 { 00:11:57.975 "name": null, 00:11:57.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.975 "is_configured": false, 00:11:57.975 "data_offset": 0, 00:11:57.975 "data_size": 63488 00:11:57.975 }, 00:11:57.975 { 00:11:57.975 "name": "BaseBdev2", 00:11:57.975 "uuid": "737e93ff-dfb3-4192-a056-a812e989af70", 00:11:57.975 "is_configured": true, 00:11:57.975 "data_offset": 2048, 00:11:57.975 "data_size": 63488 00:11:57.975 }, 00:11:57.975 { 00:11:57.975 "name": "BaseBdev3", 00:11:57.975 "uuid": "194d520b-4627-49f3-bc96-eca10d5fbbd8", 00:11:57.975 "is_configured": true, 00:11:57.975 "data_offset": 2048, 00:11:57.975 "data_size": 63488 00:11:57.975 }, 00:11:57.975 { 00:11:57.975 "name": "BaseBdev4", 00:11:57.975 "uuid": "a926e790-ebbe-44ab-9269-df71a1f75b76", 00:11:57.975 "is_configured": true, 00:11:57.975 "data_offset": 2048, 00:11:57.975 "data_size": 63488 00:11:57.975 } 00:11:57.975 ] 00:11:57.975 }' 00:11:57.976 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.976 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:58.546 03:14:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 [2024-10-09 03:14:41.687045] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.546 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.546 [2024-10-09 03:14:41.844866] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:58.806 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.806 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:58.806 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:58.806 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.806 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:58.806 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.806 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.806 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.806 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:58.806 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:58.806 03:14:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:11:58.806 03:14:41 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.806 03:14:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:58.806 [2024-10-09 03:14:42.002643] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:11:58.806 [2024-10-09 03:14:42.002792] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:58.806 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.806 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:58.806 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.066 BaseBdev2 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.066 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.066 [ 00:11:59.066 { 00:11:59.066 "name": "BaseBdev2", 00:11:59.066 "aliases": [ 00:11:59.066 
"c7986d09-d5f3-46d7-8576-3f670d9e498d" 00:11:59.066 ], 00:11:59.066 "product_name": "Malloc disk", 00:11:59.066 "block_size": 512, 00:11:59.066 "num_blocks": 65536, 00:11:59.066 "uuid": "c7986d09-d5f3-46d7-8576-3f670d9e498d", 00:11:59.066 "assigned_rate_limits": { 00:11:59.066 "rw_ios_per_sec": 0, 00:11:59.066 "rw_mbytes_per_sec": 0, 00:11:59.066 "r_mbytes_per_sec": 0, 00:11:59.066 "w_mbytes_per_sec": 0 00:11:59.066 }, 00:11:59.066 "claimed": false, 00:11:59.066 "zoned": false, 00:11:59.066 "supported_io_types": { 00:11:59.066 "read": true, 00:11:59.066 "write": true, 00:11:59.066 "unmap": true, 00:11:59.066 "flush": true, 00:11:59.066 "reset": true, 00:11:59.066 "nvme_admin": false, 00:11:59.066 "nvme_io": false, 00:11:59.066 "nvme_io_md": false, 00:11:59.066 "write_zeroes": true, 00:11:59.066 "zcopy": true, 00:11:59.066 "get_zone_info": false, 00:11:59.066 "zone_management": false, 00:11:59.066 "zone_append": false, 00:11:59.066 "compare": false, 00:11:59.066 "compare_and_write": false, 00:11:59.066 "abort": true, 00:11:59.066 "seek_hole": false, 00:11:59.066 "seek_data": false, 00:11:59.066 "copy": true, 00:11:59.066 "nvme_iov_md": false 00:11:59.067 }, 00:11:59.067 "memory_domains": [ 00:11:59.067 { 00:11:59.067 "dma_device_id": "system", 00:11:59.067 "dma_device_type": 1 00:11:59.067 }, 00:11:59.067 { 00:11:59.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.067 "dma_device_type": 2 00:11:59.067 } 00:11:59.067 ], 00:11:59.067 "driver_specific": {} 00:11:59.067 } 00:11:59.067 ] 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:59.067 03:14:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.067 BaseBdev3 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.067 [ 00:11:59.067 { 
00:11:59.067 "name": "BaseBdev3", 00:11:59.067 "aliases": [ 00:11:59.067 "f36b37c8-e106-4ee8-934b-b469981c8abd" 00:11:59.067 ], 00:11:59.067 "product_name": "Malloc disk", 00:11:59.067 "block_size": 512, 00:11:59.067 "num_blocks": 65536, 00:11:59.067 "uuid": "f36b37c8-e106-4ee8-934b-b469981c8abd", 00:11:59.067 "assigned_rate_limits": { 00:11:59.067 "rw_ios_per_sec": 0, 00:11:59.067 "rw_mbytes_per_sec": 0, 00:11:59.067 "r_mbytes_per_sec": 0, 00:11:59.067 "w_mbytes_per_sec": 0 00:11:59.067 }, 00:11:59.067 "claimed": false, 00:11:59.067 "zoned": false, 00:11:59.067 "supported_io_types": { 00:11:59.067 "read": true, 00:11:59.067 "write": true, 00:11:59.067 "unmap": true, 00:11:59.067 "flush": true, 00:11:59.067 "reset": true, 00:11:59.067 "nvme_admin": false, 00:11:59.067 "nvme_io": false, 00:11:59.067 "nvme_io_md": false, 00:11:59.067 "write_zeroes": true, 00:11:59.067 "zcopy": true, 00:11:59.067 "get_zone_info": false, 00:11:59.067 "zone_management": false, 00:11:59.067 "zone_append": false, 00:11:59.067 "compare": false, 00:11:59.067 "compare_and_write": false, 00:11:59.067 "abort": true, 00:11:59.067 "seek_hole": false, 00:11:59.067 "seek_data": false, 00:11:59.067 "copy": true, 00:11:59.067 "nvme_iov_md": false 00:11:59.067 }, 00:11:59.067 "memory_domains": [ 00:11:59.067 { 00:11:59.067 "dma_device_id": "system", 00:11:59.067 "dma_device_type": 1 00:11:59.067 }, 00:11:59.067 { 00:11:59.067 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.067 "dma_device_type": 2 00:11:59.067 } 00:11:59.067 ], 00:11:59.067 "driver_specific": {} 00:11:59.067 } 00:11:59.067 ] 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.067 BaseBdev4 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.067 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.327 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.327 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:59.327 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.327 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:11:59.327 [ 00:11:59.327 { 00:11:59.327 "name": "BaseBdev4", 00:11:59.327 "aliases": [ 00:11:59.327 "810a08b5-229c-4348-8720-f2dc6fdd9ab0" 00:11:59.327 ], 00:11:59.327 "product_name": "Malloc disk", 00:11:59.327 "block_size": 512, 00:11:59.327 "num_blocks": 65536, 00:11:59.327 "uuid": "810a08b5-229c-4348-8720-f2dc6fdd9ab0", 00:11:59.327 "assigned_rate_limits": { 00:11:59.327 "rw_ios_per_sec": 0, 00:11:59.327 "rw_mbytes_per_sec": 0, 00:11:59.328 "r_mbytes_per_sec": 0, 00:11:59.328 "w_mbytes_per_sec": 0 00:11:59.328 }, 00:11:59.328 "claimed": false, 00:11:59.328 "zoned": false, 00:11:59.328 "supported_io_types": { 00:11:59.328 "read": true, 00:11:59.328 "write": true, 00:11:59.328 "unmap": true, 00:11:59.328 "flush": true, 00:11:59.328 "reset": true, 00:11:59.328 "nvme_admin": false, 00:11:59.328 "nvme_io": false, 00:11:59.328 "nvme_io_md": false, 00:11:59.328 "write_zeroes": true, 00:11:59.328 "zcopy": true, 00:11:59.328 "get_zone_info": false, 00:11:59.328 "zone_management": false, 00:11:59.328 "zone_append": false, 00:11:59.328 "compare": false, 00:11:59.328 "compare_and_write": false, 00:11:59.328 "abort": true, 00:11:59.328 "seek_hole": false, 00:11:59.328 "seek_data": false, 00:11:59.328 "copy": true, 00:11:59.328 "nvme_iov_md": false 00:11:59.328 }, 00:11:59.328 "memory_domains": [ 00:11:59.328 { 00:11:59.328 "dma_device_id": "system", 00:11:59.328 "dma_device_type": 1 00:11:59.328 }, 00:11:59.328 { 00:11:59.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.328 "dma_device_type": 2 00:11:59.328 } 00:11:59.328 ], 00:11:59.328 "driver_specific": {} 00:11:59.328 } 00:11:59.328 ] 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:59.328 03:14:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.328 [2024-10-09 03:14:42.406486] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:59.328 [2024-10-09 03:14:42.406616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:59.328 [2024-10-09 03:14:42.406658] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:59.328 [2024-10-09 03:14:42.408690] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:59.328 [2024-10-09 03:14:42.408785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.328 "name": "Existed_Raid", 00:11:59.328 "uuid": "775638bf-13bc-42dd-b7ed-617412ff61e2", 00:11:59.328 "strip_size_kb": 64, 00:11:59.328 "state": "configuring", 00:11:59.328 "raid_level": "concat", 00:11:59.328 "superblock": true, 00:11:59.328 "num_base_bdevs": 4, 00:11:59.328 "num_base_bdevs_discovered": 3, 00:11:59.328 "num_base_bdevs_operational": 4, 00:11:59.328 "base_bdevs_list": [ 00:11:59.328 { 00:11:59.328 "name": "BaseBdev1", 00:11:59.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.328 "is_configured": false, 00:11:59.328 "data_offset": 0, 00:11:59.328 "data_size": 0 00:11:59.328 }, 00:11:59.328 { 00:11:59.328 "name": "BaseBdev2", 00:11:59.328 "uuid": "c7986d09-d5f3-46d7-8576-3f670d9e498d", 00:11:59.328 "is_configured": true, 00:11:59.328 "data_offset": 2048, 00:11:59.328 "data_size": 63488 
00:11:59.328 }, 00:11:59.328 { 00:11:59.328 "name": "BaseBdev3", 00:11:59.328 "uuid": "f36b37c8-e106-4ee8-934b-b469981c8abd", 00:11:59.328 "is_configured": true, 00:11:59.328 "data_offset": 2048, 00:11:59.328 "data_size": 63488 00:11:59.328 }, 00:11:59.328 { 00:11:59.328 "name": "BaseBdev4", 00:11:59.328 "uuid": "810a08b5-229c-4348-8720-f2dc6fdd9ab0", 00:11:59.328 "is_configured": true, 00:11:59.328 "data_offset": 2048, 00:11:59.328 "data_size": 63488 00:11:59.328 } 00:11:59.328 ] 00:11:59.328 }' 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.328 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.590 [2024-10-09 03:14:42.861898] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:59.590 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.854 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:59.854 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:59.854 "name": "Existed_Raid", 00:11:59.854 "uuid": "775638bf-13bc-42dd-b7ed-617412ff61e2", 00:11:59.854 "strip_size_kb": 64, 00:11:59.855 "state": "configuring", 00:11:59.855 "raid_level": "concat", 00:11:59.855 "superblock": true, 00:11:59.855 "num_base_bdevs": 4, 00:11:59.855 "num_base_bdevs_discovered": 2, 00:11:59.855 "num_base_bdevs_operational": 4, 00:11:59.855 "base_bdevs_list": [ 00:11:59.855 { 00:11:59.855 "name": "BaseBdev1", 00:11:59.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.855 "is_configured": false, 00:11:59.855 "data_offset": 0, 00:11:59.855 "data_size": 0 00:11:59.855 }, 00:11:59.855 { 00:11:59.855 "name": null, 00:11:59.855 "uuid": "c7986d09-d5f3-46d7-8576-3f670d9e498d", 00:11:59.855 "is_configured": false, 00:11:59.855 "data_offset": 0, 00:11:59.855 "data_size": 63488 
00:11:59.855 }, 00:11:59.855 { 00:11:59.855 "name": "BaseBdev3", 00:11:59.855 "uuid": "f36b37c8-e106-4ee8-934b-b469981c8abd", 00:11:59.855 "is_configured": true, 00:11:59.855 "data_offset": 2048, 00:11:59.855 "data_size": 63488 00:11:59.855 }, 00:11:59.855 { 00:11:59.855 "name": "BaseBdev4", 00:11:59.855 "uuid": "810a08b5-229c-4348-8720-f2dc6fdd9ab0", 00:11:59.855 "is_configured": true, 00:11:59.855 "data_offset": 2048, 00:11:59.855 "data_size": 63488 00:11:59.855 } 00:11:59.855 ] 00:11:59.855 }' 00:11:59.855 03:14:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:59.855 03:14:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.115 [2024-10-09 03:14:43.366643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:00.115 BaseBdev1 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.115 [ 00:12:00.115 { 00:12:00.115 "name": "BaseBdev1", 00:12:00.115 "aliases": [ 00:12:00.115 "e2fe77d2-69ef-4b6e-a27d-6d6e283b30ed" 00:12:00.115 ], 00:12:00.115 "product_name": "Malloc disk", 00:12:00.115 "block_size": 512, 00:12:00.115 "num_blocks": 65536, 00:12:00.115 "uuid": "e2fe77d2-69ef-4b6e-a27d-6d6e283b30ed", 00:12:00.115 "assigned_rate_limits": { 00:12:00.115 "rw_ios_per_sec": 0, 00:12:00.115 "rw_mbytes_per_sec": 0, 
00:12:00.115 "r_mbytes_per_sec": 0, 00:12:00.115 "w_mbytes_per_sec": 0 00:12:00.115 }, 00:12:00.115 "claimed": true, 00:12:00.115 "claim_type": "exclusive_write", 00:12:00.115 "zoned": false, 00:12:00.115 "supported_io_types": { 00:12:00.115 "read": true, 00:12:00.115 "write": true, 00:12:00.115 "unmap": true, 00:12:00.115 "flush": true, 00:12:00.115 "reset": true, 00:12:00.115 "nvme_admin": false, 00:12:00.115 "nvme_io": false, 00:12:00.115 "nvme_io_md": false, 00:12:00.115 "write_zeroes": true, 00:12:00.115 "zcopy": true, 00:12:00.115 "get_zone_info": false, 00:12:00.115 "zone_management": false, 00:12:00.115 "zone_append": false, 00:12:00.115 "compare": false, 00:12:00.115 "compare_and_write": false, 00:12:00.115 "abort": true, 00:12:00.115 "seek_hole": false, 00:12:00.115 "seek_data": false, 00:12:00.115 "copy": true, 00:12:00.115 "nvme_iov_md": false 00:12:00.115 }, 00:12:00.115 "memory_domains": [ 00:12:00.115 { 00:12:00.115 "dma_device_id": "system", 00:12:00.115 "dma_device_type": 1 00:12:00.115 }, 00:12:00.115 { 00:12:00.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.115 "dma_device_type": 2 00:12:00.115 } 00:12:00.115 ], 00:12:00.115 "driver_specific": {} 00:12:00.115 } 00:12:00.115 ] 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.115 03:14:43 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.115 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.375 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.375 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.375 "name": "Existed_Raid", 00:12:00.375 "uuid": "775638bf-13bc-42dd-b7ed-617412ff61e2", 00:12:00.375 "strip_size_kb": 64, 00:12:00.375 "state": "configuring", 00:12:00.375 "raid_level": "concat", 00:12:00.375 "superblock": true, 00:12:00.375 "num_base_bdevs": 4, 00:12:00.375 "num_base_bdevs_discovered": 3, 00:12:00.375 "num_base_bdevs_operational": 4, 00:12:00.375 "base_bdevs_list": [ 00:12:00.375 { 00:12:00.375 "name": "BaseBdev1", 00:12:00.375 "uuid": "e2fe77d2-69ef-4b6e-a27d-6d6e283b30ed", 00:12:00.375 "is_configured": true, 00:12:00.375 "data_offset": 2048, 00:12:00.375 "data_size": 63488 00:12:00.375 }, 00:12:00.375 { 
00:12:00.375 "name": null, 00:12:00.375 "uuid": "c7986d09-d5f3-46d7-8576-3f670d9e498d", 00:12:00.375 "is_configured": false, 00:12:00.375 "data_offset": 0, 00:12:00.375 "data_size": 63488 00:12:00.375 }, 00:12:00.375 { 00:12:00.375 "name": "BaseBdev3", 00:12:00.375 "uuid": "f36b37c8-e106-4ee8-934b-b469981c8abd", 00:12:00.375 "is_configured": true, 00:12:00.375 "data_offset": 2048, 00:12:00.375 "data_size": 63488 00:12:00.375 }, 00:12:00.375 { 00:12:00.375 "name": "BaseBdev4", 00:12:00.375 "uuid": "810a08b5-229c-4348-8720-f2dc6fdd9ab0", 00:12:00.375 "is_configured": true, 00:12:00.375 "data_offset": 2048, 00:12:00.375 "data_size": 63488 00:12:00.375 } 00:12:00.375 ] 00:12:00.375 }' 00:12:00.375 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.375 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.637 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:00.637 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.637 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.637 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.637 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.896 [2024-10-09 03:14:43.957829] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.896 03:14:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:00.896 03:14:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.896 "name": "Existed_Raid", 00:12:00.896 "uuid": "775638bf-13bc-42dd-b7ed-617412ff61e2", 00:12:00.896 "strip_size_kb": 64, 00:12:00.896 "state": "configuring", 00:12:00.896 "raid_level": "concat", 00:12:00.896 "superblock": true, 00:12:00.896 "num_base_bdevs": 4, 00:12:00.896 "num_base_bdevs_discovered": 2, 00:12:00.896 "num_base_bdevs_operational": 4, 00:12:00.896 "base_bdevs_list": [ 00:12:00.896 { 00:12:00.896 "name": "BaseBdev1", 00:12:00.896 "uuid": "e2fe77d2-69ef-4b6e-a27d-6d6e283b30ed", 00:12:00.896 "is_configured": true, 00:12:00.896 "data_offset": 2048, 00:12:00.896 "data_size": 63488 00:12:00.896 }, 00:12:00.896 { 00:12:00.896 "name": null, 00:12:00.896 "uuid": "c7986d09-d5f3-46d7-8576-3f670d9e498d", 00:12:00.896 "is_configured": false, 00:12:00.896 "data_offset": 0, 00:12:00.896 "data_size": 63488 00:12:00.896 }, 00:12:00.896 { 00:12:00.896 "name": null, 00:12:00.896 "uuid": "f36b37c8-e106-4ee8-934b-b469981c8abd", 00:12:00.896 "is_configured": false, 00:12:00.896 "data_offset": 0, 00:12:00.896 "data_size": 63488 00:12:00.896 }, 00:12:00.896 { 00:12:00.896 "name": "BaseBdev4", 00:12:00.896 "uuid": "810a08b5-229c-4348-8720-f2dc6fdd9ab0", 00:12:00.896 "is_configured": true, 00:12:00.896 "data_offset": 2048, 00:12:00.896 "data_size": 63488 00:12:00.896 } 00:12:00.896 ] 00:12:00.896 }' 00:12:00.896 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.896 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.156 03:14:44 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.156 [2024-10-09 03:14:44.421140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:01.156 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.157 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.157 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.157 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.157 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.157 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.416 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.416 "name": "Existed_Raid", 00:12:01.416 "uuid": "775638bf-13bc-42dd-b7ed-617412ff61e2", 00:12:01.416 "strip_size_kb": 64, 00:12:01.416 "state": "configuring", 00:12:01.416 "raid_level": "concat", 00:12:01.416 "superblock": true, 00:12:01.416 "num_base_bdevs": 4, 00:12:01.416 "num_base_bdevs_discovered": 3, 00:12:01.416 "num_base_bdevs_operational": 4, 00:12:01.416 "base_bdevs_list": [ 00:12:01.416 { 00:12:01.416 "name": "BaseBdev1", 00:12:01.416 "uuid": "e2fe77d2-69ef-4b6e-a27d-6d6e283b30ed", 00:12:01.416 "is_configured": true, 00:12:01.416 "data_offset": 2048, 00:12:01.416 "data_size": 63488 00:12:01.416 }, 00:12:01.416 { 00:12:01.416 "name": null, 00:12:01.416 "uuid": "c7986d09-d5f3-46d7-8576-3f670d9e498d", 00:12:01.416 "is_configured": false, 00:12:01.416 "data_offset": 0, 00:12:01.416 "data_size": 63488 00:12:01.416 }, 00:12:01.416 { 00:12:01.416 "name": "BaseBdev3", 00:12:01.416 "uuid": "f36b37c8-e106-4ee8-934b-b469981c8abd", 00:12:01.416 "is_configured": true, 00:12:01.416 "data_offset": 2048, 00:12:01.416 "data_size": 63488 00:12:01.416 }, 00:12:01.416 { 00:12:01.416 "name": "BaseBdev4", 00:12:01.416 "uuid": 
"810a08b5-229c-4348-8720-f2dc6fdd9ab0", 00:12:01.416 "is_configured": true, 00:12:01.416 "data_offset": 2048, 00:12:01.416 "data_size": 63488 00:12:01.416 } 00:12:01.416 ] 00:12:01.416 }' 00:12:01.416 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.416 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.676 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.676 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:01.676 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.676 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.676 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.676 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:01.676 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:01.676 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.676 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.676 [2024-10-09 03:14:44.892748] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:01.936 03:14:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.936 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:01.936 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.936 03:14:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.936 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:01.936 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.936 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.936 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.936 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.936 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.936 03:14:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.936 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.936 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:01.936 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.936 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.936 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:01.936 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.936 "name": "Existed_Raid", 00:12:01.936 "uuid": "775638bf-13bc-42dd-b7ed-617412ff61e2", 00:12:01.936 "strip_size_kb": 64, 00:12:01.936 "state": "configuring", 00:12:01.936 "raid_level": "concat", 00:12:01.936 "superblock": true, 00:12:01.936 "num_base_bdevs": 4, 00:12:01.936 "num_base_bdevs_discovered": 2, 00:12:01.936 "num_base_bdevs_operational": 4, 00:12:01.936 "base_bdevs_list": [ 00:12:01.936 { 00:12:01.936 "name": null, 00:12:01.936 
"uuid": "e2fe77d2-69ef-4b6e-a27d-6d6e283b30ed", 00:12:01.936 "is_configured": false, 00:12:01.936 "data_offset": 0, 00:12:01.936 "data_size": 63488 00:12:01.936 }, 00:12:01.936 { 00:12:01.936 "name": null, 00:12:01.936 "uuid": "c7986d09-d5f3-46d7-8576-3f670d9e498d", 00:12:01.936 "is_configured": false, 00:12:01.936 "data_offset": 0, 00:12:01.936 "data_size": 63488 00:12:01.936 }, 00:12:01.936 { 00:12:01.936 "name": "BaseBdev3", 00:12:01.936 "uuid": "f36b37c8-e106-4ee8-934b-b469981c8abd", 00:12:01.936 "is_configured": true, 00:12:01.936 "data_offset": 2048, 00:12:01.936 "data_size": 63488 00:12:01.936 }, 00:12:01.936 { 00:12:01.936 "name": "BaseBdev4", 00:12:01.936 "uuid": "810a08b5-229c-4348-8720-f2dc6fdd9ab0", 00:12:01.936 "is_configured": true, 00:12:01.936 "data_offset": 2048, 00:12:01.936 "data_size": 63488 00:12:01.936 } 00:12:01.936 ] 00:12:01.936 }' 00:12:01.936 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.936 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.196 [2024-10-09 03:14:45.426980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.196 03:14:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.196 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.196 "name": "Existed_Raid", 00:12:02.196 "uuid": "775638bf-13bc-42dd-b7ed-617412ff61e2", 00:12:02.196 "strip_size_kb": 64, 00:12:02.196 "state": "configuring", 00:12:02.196 "raid_level": "concat", 00:12:02.196 "superblock": true, 00:12:02.196 "num_base_bdevs": 4, 00:12:02.196 "num_base_bdevs_discovered": 3, 00:12:02.197 "num_base_bdevs_operational": 4, 00:12:02.197 "base_bdevs_list": [ 00:12:02.197 { 00:12:02.197 "name": null, 00:12:02.197 "uuid": "e2fe77d2-69ef-4b6e-a27d-6d6e283b30ed", 00:12:02.197 "is_configured": false, 00:12:02.197 "data_offset": 0, 00:12:02.197 "data_size": 63488 00:12:02.197 }, 00:12:02.197 { 00:12:02.197 "name": "BaseBdev2", 00:12:02.197 "uuid": "c7986d09-d5f3-46d7-8576-3f670d9e498d", 00:12:02.197 "is_configured": true, 00:12:02.197 "data_offset": 2048, 00:12:02.197 "data_size": 63488 00:12:02.197 }, 00:12:02.197 { 00:12:02.197 "name": "BaseBdev3", 00:12:02.197 "uuid": "f36b37c8-e106-4ee8-934b-b469981c8abd", 00:12:02.197 "is_configured": true, 00:12:02.197 "data_offset": 2048, 00:12:02.197 "data_size": 63488 00:12:02.197 }, 00:12:02.197 { 00:12:02.197 "name": "BaseBdev4", 00:12:02.197 "uuid": "810a08b5-229c-4348-8720-f2dc6fdd9ab0", 00:12:02.197 "is_configured": true, 00:12:02.197 "data_offset": 2048, 00:12:02.197 "data_size": 63488 00:12:02.197 } 00:12:02.197 ] 00:12:02.197 }' 00:12:02.197 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.197 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.765 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.765 03:14:45 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.766 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.766 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:02.766 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.766 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:02.766 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.766 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.766 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.766 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:02.766 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.766 03:14:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e2fe77d2-69ef-4b6e-a27d-6d6e283b30ed 00:12:02.766 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.766 03:14:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.766 [2024-10-09 03:14:46.022916] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:02.766 NewBaseBdev 00:12:02.766 [2024-10-09 03:14:46.023257] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:02.766 [2024-10-09 03:14:46.023277] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:02.766 [2024-10-09 03:14:46.023568] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:02.766 [2024-10-09 03:14:46.023707] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:02.766 [2024-10-09 03:14:46.023719] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:02.766 [2024-10-09 03:14:46.023866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.766 
03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.766 [ 00:12:02.766 { 00:12:02.766 "name": "NewBaseBdev", 00:12:02.766 "aliases": [ 00:12:02.766 "e2fe77d2-69ef-4b6e-a27d-6d6e283b30ed" 00:12:02.766 ], 00:12:02.766 "product_name": "Malloc disk", 00:12:02.766 "block_size": 512, 00:12:02.766 "num_blocks": 65536, 00:12:02.766 "uuid": "e2fe77d2-69ef-4b6e-a27d-6d6e283b30ed", 00:12:02.766 "assigned_rate_limits": { 00:12:02.766 "rw_ios_per_sec": 0, 00:12:02.766 "rw_mbytes_per_sec": 0, 00:12:02.766 "r_mbytes_per_sec": 0, 00:12:02.766 "w_mbytes_per_sec": 0 00:12:02.766 }, 00:12:02.766 "claimed": true, 00:12:02.766 "claim_type": "exclusive_write", 00:12:02.766 "zoned": false, 00:12:02.766 "supported_io_types": { 00:12:02.766 "read": true, 00:12:02.766 "write": true, 00:12:02.766 "unmap": true, 00:12:02.766 "flush": true, 00:12:02.766 "reset": true, 00:12:02.766 "nvme_admin": false, 00:12:02.766 "nvme_io": false, 00:12:02.766 "nvme_io_md": false, 00:12:02.766 "write_zeroes": true, 00:12:02.766 "zcopy": true, 00:12:02.766 "get_zone_info": false, 00:12:02.766 "zone_management": false, 00:12:02.766 "zone_append": false, 00:12:02.766 "compare": false, 00:12:02.766 "compare_and_write": false, 00:12:02.766 "abort": true, 00:12:02.766 "seek_hole": false, 00:12:02.766 "seek_data": false, 00:12:02.766 "copy": true, 00:12:02.766 "nvme_iov_md": false 00:12:02.766 }, 00:12:02.766 "memory_domains": [ 00:12:02.766 { 00:12:02.766 "dma_device_id": "system", 00:12:02.766 "dma_device_type": 1 00:12:02.766 }, 00:12:02.766 { 00:12:02.766 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.766 "dma_device_type": 2 00:12:02.766 } 00:12:02.766 ], 00:12:02.766 "driver_specific": {} 00:12:02.766 } 00:12:02.766 ] 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:02.766 03:14:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:02.766 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.025 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.025 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.025 "name": "Existed_Raid", 00:12:03.025 "uuid": "775638bf-13bc-42dd-b7ed-617412ff61e2", 00:12:03.025 "strip_size_kb": 64, 00:12:03.025 
"state": "online", 00:12:03.026 "raid_level": "concat", 00:12:03.026 "superblock": true, 00:12:03.026 "num_base_bdevs": 4, 00:12:03.026 "num_base_bdevs_discovered": 4, 00:12:03.026 "num_base_bdevs_operational": 4, 00:12:03.026 "base_bdevs_list": [ 00:12:03.026 { 00:12:03.026 "name": "NewBaseBdev", 00:12:03.026 "uuid": "e2fe77d2-69ef-4b6e-a27d-6d6e283b30ed", 00:12:03.026 "is_configured": true, 00:12:03.026 "data_offset": 2048, 00:12:03.026 "data_size": 63488 00:12:03.026 }, 00:12:03.026 { 00:12:03.026 "name": "BaseBdev2", 00:12:03.026 "uuid": "c7986d09-d5f3-46d7-8576-3f670d9e498d", 00:12:03.026 "is_configured": true, 00:12:03.026 "data_offset": 2048, 00:12:03.026 "data_size": 63488 00:12:03.026 }, 00:12:03.026 { 00:12:03.026 "name": "BaseBdev3", 00:12:03.026 "uuid": "f36b37c8-e106-4ee8-934b-b469981c8abd", 00:12:03.026 "is_configured": true, 00:12:03.026 "data_offset": 2048, 00:12:03.026 "data_size": 63488 00:12:03.026 }, 00:12:03.026 { 00:12:03.026 "name": "BaseBdev4", 00:12:03.026 "uuid": "810a08b5-229c-4348-8720-f2dc6fdd9ab0", 00:12:03.026 "is_configured": true, 00:12:03.026 "data_offset": 2048, 00:12:03.026 "data_size": 63488 00:12:03.026 } 00:12:03.026 ] 00:12:03.026 }' 00:12:03.026 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.026 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.286 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:03.286 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:03.286 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:03.286 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:03.286 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:03.286 
03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:03.286 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:03.286 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.286 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.286 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:03.286 [2024-10-09 03:14:46.482394] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.286 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.286 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:03.286 "name": "Existed_Raid", 00:12:03.286 "aliases": [ 00:12:03.286 "775638bf-13bc-42dd-b7ed-617412ff61e2" 00:12:03.286 ], 00:12:03.286 "product_name": "Raid Volume", 00:12:03.286 "block_size": 512, 00:12:03.286 "num_blocks": 253952, 00:12:03.286 "uuid": "775638bf-13bc-42dd-b7ed-617412ff61e2", 00:12:03.286 "assigned_rate_limits": { 00:12:03.286 "rw_ios_per_sec": 0, 00:12:03.286 "rw_mbytes_per_sec": 0, 00:12:03.286 "r_mbytes_per_sec": 0, 00:12:03.286 "w_mbytes_per_sec": 0 00:12:03.286 }, 00:12:03.286 "claimed": false, 00:12:03.286 "zoned": false, 00:12:03.286 "supported_io_types": { 00:12:03.286 "read": true, 00:12:03.286 "write": true, 00:12:03.286 "unmap": true, 00:12:03.286 "flush": true, 00:12:03.286 "reset": true, 00:12:03.286 "nvme_admin": false, 00:12:03.286 "nvme_io": false, 00:12:03.286 "nvme_io_md": false, 00:12:03.286 "write_zeroes": true, 00:12:03.286 "zcopy": false, 00:12:03.286 "get_zone_info": false, 00:12:03.286 "zone_management": false, 00:12:03.286 "zone_append": false, 00:12:03.286 "compare": false, 00:12:03.286 "compare_and_write": false, 00:12:03.286 "abort": 
false, 00:12:03.286 "seek_hole": false, 00:12:03.286 "seek_data": false, 00:12:03.286 "copy": false, 00:12:03.286 "nvme_iov_md": false 00:12:03.286 }, 00:12:03.286 "memory_domains": [ 00:12:03.286 { 00:12:03.286 "dma_device_id": "system", 00:12:03.286 "dma_device_type": 1 00:12:03.286 }, 00:12:03.286 { 00:12:03.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.286 "dma_device_type": 2 00:12:03.286 }, 00:12:03.286 { 00:12:03.286 "dma_device_id": "system", 00:12:03.286 "dma_device_type": 1 00:12:03.286 }, 00:12:03.286 { 00:12:03.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.286 "dma_device_type": 2 00:12:03.286 }, 00:12:03.286 { 00:12:03.286 "dma_device_id": "system", 00:12:03.286 "dma_device_type": 1 00:12:03.286 }, 00:12:03.286 { 00:12:03.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.286 "dma_device_type": 2 00:12:03.286 }, 00:12:03.286 { 00:12:03.286 "dma_device_id": "system", 00:12:03.286 "dma_device_type": 1 00:12:03.286 }, 00:12:03.286 { 00:12:03.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.286 "dma_device_type": 2 00:12:03.286 } 00:12:03.286 ], 00:12:03.286 "driver_specific": { 00:12:03.286 "raid": { 00:12:03.286 "uuid": "775638bf-13bc-42dd-b7ed-617412ff61e2", 00:12:03.286 "strip_size_kb": 64, 00:12:03.286 "state": "online", 00:12:03.286 "raid_level": "concat", 00:12:03.286 "superblock": true, 00:12:03.286 "num_base_bdevs": 4, 00:12:03.286 "num_base_bdevs_discovered": 4, 00:12:03.286 "num_base_bdevs_operational": 4, 00:12:03.286 "base_bdevs_list": [ 00:12:03.286 { 00:12:03.286 "name": "NewBaseBdev", 00:12:03.286 "uuid": "e2fe77d2-69ef-4b6e-a27d-6d6e283b30ed", 00:12:03.286 "is_configured": true, 00:12:03.286 "data_offset": 2048, 00:12:03.286 "data_size": 63488 00:12:03.286 }, 00:12:03.286 { 00:12:03.286 "name": "BaseBdev2", 00:12:03.286 "uuid": "c7986d09-d5f3-46d7-8576-3f670d9e498d", 00:12:03.286 "is_configured": true, 00:12:03.286 "data_offset": 2048, 00:12:03.286 "data_size": 63488 00:12:03.286 }, 00:12:03.286 { 00:12:03.286 
"name": "BaseBdev3", 00:12:03.286 "uuid": "f36b37c8-e106-4ee8-934b-b469981c8abd", 00:12:03.286 "is_configured": true, 00:12:03.286 "data_offset": 2048, 00:12:03.286 "data_size": 63488 00:12:03.286 }, 00:12:03.286 { 00:12:03.286 "name": "BaseBdev4", 00:12:03.286 "uuid": "810a08b5-229c-4348-8720-f2dc6fdd9ab0", 00:12:03.286 "is_configured": true, 00:12:03.286 "data_offset": 2048, 00:12:03.286 "data_size": 63488 00:12:03.286 } 00:12:03.286 ] 00:12:03.286 } 00:12:03.286 } 00:12:03.286 }' 00:12:03.286 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.286 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:03.286 BaseBdev2 00:12:03.286 BaseBdev3 00:12:03.286 BaseBdev4' 00:12:03.286 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.546 03:14:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:03.546 [2024-10-09 03:14:46.785697] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:03.546 [2024-10-09 03:14:46.785765] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:03.546 [2024-10-09 03:14:46.785860] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:03.546 [2024-10-09 03:14:46.785945] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:03.546 [2024-10-09 03:14:46.785986] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 72119 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 72119 ']' 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 72119 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72119 00:12:03.546 killing process with pid 72119 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72119' 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 72119 00:12:03.546 [2024-10-09 03:14:46.832439] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:03.546 03:14:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 72119 00:12:04.114 [2024-10-09 03:14:47.245281] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:05.493 03:14:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:05.493 00:12:05.493 real 0m11.892s 00:12:05.493 user 0m18.504s 00:12:05.493 sys 0m2.267s 00:12:05.493 03:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.493 
************************************ 00:12:05.493 END TEST raid_state_function_test_sb 00:12:05.493 ************************************ 00:12:05.493 03:14:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:05.493 03:14:48 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:12:05.493 03:14:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:05.493 03:14:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.493 03:14:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:05.493 ************************************ 00:12:05.493 START TEST raid_superblock_test 00:12:05.493 ************************************ 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test concat 4 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72789 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72789 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 72789 ']' 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:05.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:05.493 03:14:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.493 [2024-10-09 03:14:48.726933] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:12:05.493 [2024-10-09 03:14:48.727111] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72789 ] 00:12:05.752 [2024-10-09 03:14:48.890994] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:06.011 [2024-10-09 03:14:49.134351] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.270 [2024-10-09 03:14:49.379605] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:06.270 [2024-10-09 03:14:49.379747] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:06.271 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:06.271 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:06.271 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:06.271 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:06.271 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:06.271 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:06.271 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:06.271 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:06.271 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:06.271 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:06.271 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:06.271 
03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.271 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.531 malloc1 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.531 [2024-10-09 03:14:49.608982] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:06.531 [2024-10-09 03:14:49.609133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.531 [2024-10-09 03:14:49.609177] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:06.531 [2024-10-09 03:14:49.609210] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.531 [2024-10-09 03:14:49.611524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.531 [2024-10-09 03:14:49.611598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:06.531 pt1 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.531 malloc2 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.531 [2024-10-09 03:14:49.699163] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:06.531 [2024-10-09 03:14:49.699267] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.531 [2024-10-09 03:14:49.699310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:06.531 [2024-10-09 03:14:49.699337] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.531 [2024-10-09 03:14:49.701630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.531 [2024-10-09 03:14:49.701699] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:06.531 
pt2 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.531 malloc3 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.531 [2024-10-09 03:14:49.760911] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:06.531 [2024-10-09 03:14:49.761004] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.531 [2024-10-09 03:14:49.761043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:06.531 [2024-10-09 03:14:49.761072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.531 [2024-10-09 03:14:49.763418] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.531 [2024-10-09 03:14:49.763488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:06.531 pt3 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.531 malloc4 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.531 [2024-10-09 03:14:49.822105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:06.531 [2024-10-09 03:14:49.822218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:06.531 [2024-10-09 03:14:49.822254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:06.531 [2024-10-09 03:14:49.822280] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:06.531 [2024-10-09 03:14:49.824584] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:06.531 [2024-10-09 03:14:49.824656] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:06.531 pt4 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.531 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.791 [2024-10-09 03:14:49.834145] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:06.791 [2024-10-09 
03:14:49.836179] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:06.791 [2024-10-09 03:14:49.836282] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:06.791 [2024-10-09 03:14:49.836363] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:06.791 [2024-10-09 03:14:49.836591] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:06.791 [2024-10-09 03:14:49.836642] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:06.791 [2024-10-09 03:14:49.836928] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:06.791 [2024-10-09 03:14:49.837130] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:06.791 [2024-10-09 03:14:49.837180] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:06.791 [2024-10-09 03:14:49.837369] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:06.791 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:06.791 "name": "raid_bdev1", 00:12:06.791 "uuid": "5b43edf0-0310-4a90-867d-cefe40e40cfb", 00:12:06.791 "strip_size_kb": 64, 00:12:06.791 "state": "online", 00:12:06.791 "raid_level": "concat", 00:12:06.791 "superblock": true, 00:12:06.791 "num_base_bdevs": 4, 00:12:06.791 "num_base_bdevs_discovered": 4, 00:12:06.791 "num_base_bdevs_operational": 4, 00:12:06.791 "base_bdevs_list": [ 00:12:06.791 { 00:12:06.791 "name": "pt1", 00:12:06.791 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:06.792 "is_configured": true, 00:12:06.792 "data_offset": 2048, 00:12:06.792 "data_size": 63488 00:12:06.792 }, 00:12:06.792 { 00:12:06.792 "name": "pt2", 00:12:06.792 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:06.792 "is_configured": true, 00:12:06.792 "data_offset": 2048, 00:12:06.792 "data_size": 63488 00:12:06.792 }, 00:12:06.792 { 00:12:06.792 "name": "pt3", 00:12:06.792 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:06.792 "is_configured": true, 00:12:06.792 "data_offset": 2048, 00:12:06.792 
"data_size": 63488 00:12:06.792 }, 00:12:06.792 { 00:12:06.792 "name": "pt4", 00:12:06.792 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:06.792 "is_configured": true, 00:12:06.792 "data_offset": 2048, 00:12:06.792 "data_size": 63488 00:12:06.792 } 00:12:06.792 ] 00:12:06.792 }' 00:12:06.792 03:14:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:06.792 03:14:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.051 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:07.051 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:07.051 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:07.051 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:07.051 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:07.051 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:07.051 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:07.051 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:07.051 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.051 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.051 [2024-10-09 03:14:50.317655] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.051 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.051 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:07.051 "name": "raid_bdev1", 00:12:07.051 "aliases": [ 00:12:07.051 "5b43edf0-0310-4a90-867d-cefe40e40cfb" 
00:12:07.051 ], 00:12:07.051 "product_name": "Raid Volume", 00:12:07.051 "block_size": 512, 00:12:07.051 "num_blocks": 253952, 00:12:07.051 "uuid": "5b43edf0-0310-4a90-867d-cefe40e40cfb", 00:12:07.051 "assigned_rate_limits": { 00:12:07.051 "rw_ios_per_sec": 0, 00:12:07.051 "rw_mbytes_per_sec": 0, 00:12:07.051 "r_mbytes_per_sec": 0, 00:12:07.051 "w_mbytes_per_sec": 0 00:12:07.051 }, 00:12:07.051 "claimed": false, 00:12:07.051 "zoned": false, 00:12:07.051 "supported_io_types": { 00:12:07.051 "read": true, 00:12:07.051 "write": true, 00:12:07.051 "unmap": true, 00:12:07.051 "flush": true, 00:12:07.051 "reset": true, 00:12:07.051 "nvme_admin": false, 00:12:07.051 "nvme_io": false, 00:12:07.051 "nvme_io_md": false, 00:12:07.051 "write_zeroes": true, 00:12:07.051 "zcopy": false, 00:12:07.051 "get_zone_info": false, 00:12:07.051 "zone_management": false, 00:12:07.051 "zone_append": false, 00:12:07.051 "compare": false, 00:12:07.051 "compare_and_write": false, 00:12:07.051 "abort": false, 00:12:07.051 "seek_hole": false, 00:12:07.051 "seek_data": false, 00:12:07.051 "copy": false, 00:12:07.051 "nvme_iov_md": false 00:12:07.051 }, 00:12:07.051 "memory_domains": [ 00:12:07.051 { 00:12:07.051 "dma_device_id": "system", 00:12:07.051 "dma_device_type": 1 00:12:07.051 }, 00:12:07.051 { 00:12:07.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.051 "dma_device_type": 2 00:12:07.051 }, 00:12:07.051 { 00:12:07.051 "dma_device_id": "system", 00:12:07.051 "dma_device_type": 1 00:12:07.051 }, 00:12:07.051 { 00:12:07.051 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.051 "dma_device_type": 2 00:12:07.051 }, 00:12:07.051 { 00:12:07.051 "dma_device_id": "system", 00:12:07.052 "dma_device_type": 1 00:12:07.052 }, 00:12:07.052 { 00:12:07.052 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.052 "dma_device_type": 2 00:12:07.052 }, 00:12:07.052 { 00:12:07.052 "dma_device_id": "system", 00:12:07.052 "dma_device_type": 1 00:12:07.052 }, 00:12:07.052 { 00:12:07.052 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:07.052 "dma_device_type": 2 00:12:07.052 } 00:12:07.052 ], 00:12:07.052 "driver_specific": { 00:12:07.052 "raid": { 00:12:07.052 "uuid": "5b43edf0-0310-4a90-867d-cefe40e40cfb", 00:12:07.052 "strip_size_kb": 64, 00:12:07.052 "state": "online", 00:12:07.052 "raid_level": "concat", 00:12:07.052 "superblock": true, 00:12:07.052 "num_base_bdevs": 4, 00:12:07.052 "num_base_bdevs_discovered": 4, 00:12:07.052 "num_base_bdevs_operational": 4, 00:12:07.052 "base_bdevs_list": [ 00:12:07.052 { 00:12:07.052 "name": "pt1", 00:12:07.052 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:07.052 "is_configured": true, 00:12:07.052 "data_offset": 2048, 00:12:07.052 "data_size": 63488 00:12:07.052 }, 00:12:07.052 { 00:12:07.052 "name": "pt2", 00:12:07.052 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.052 "is_configured": true, 00:12:07.052 "data_offset": 2048, 00:12:07.052 "data_size": 63488 00:12:07.052 }, 00:12:07.052 { 00:12:07.052 "name": "pt3", 00:12:07.052 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.052 "is_configured": true, 00:12:07.052 "data_offset": 2048, 00:12:07.052 "data_size": 63488 00:12:07.052 }, 00:12:07.052 { 00:12:07.052 "name": "pt4", 00:12:07.052 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:07.052 "is_configured": true, 00:12:07.052 "data_offset": 2048, 00:12:07.052 "data_size": 63488 00:12:07.052 } 00:12:07.052 ] 00:12:07.052 } 00:12:07.052 } 00:12:07.052 }' 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:07.312 pt2 00:12:07.312 pt3 00:12:07.312 pt4' 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.312 03:14:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:07.312 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:07.571 [2024-10-09 03:14:50.629222] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5b43edf0-0310-4a90-867d-cefe40e40cfb 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 5b43edf0-0310-4a90-867d-cefe40e40cfb ']' 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.571 [2024-10-09 03:14:50.676964] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.571 [2024-10-09 03:14:50.677049] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:07.571 [2024-10-09 03:14:50.677151] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:07.571 [2024-10-09 03:14:50.677243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:07.571 [2024-10-09 03:14:50.677341] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 
-- # set +x 00:12:07.571 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.572 [2024-10-09 03:14:50.836830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:07.572 [2024-10-09 03:14:50.838973] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:07.572 [2024-10-09 03:14:50.839058] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:07.572 [2024-10-09 03:14:50.839107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:07.572 [2024-10-09 03:14:50.839191] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:07.572 [2024-10-09 03:14:50.839263] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:07.572 [2024-10-09 03:14:50.839342] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:07.572 [2024-10-09 03:14:50.839410] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:07.572 [2024-10-09 03:14:50.839480] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:07.572 [2024-10-09 03:14:50.839518] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state 
configuring 00:12:07.572 request: 00:12:07.572 { 00:12:07.572 "name": "raid_bdev1", 00:12:07.572 "raid_level": "concat", 00:12:07.572 "base_bdevs": [ 00:12:07.572 "malloc1", 00:12:07.572 "malloc2", 00:12:07.572 "malloc3", 00:12:07.572 "malloc4" 00:12:07.572 ], 00:12:07.572 "strip_size_kb": 64, 00:12:07.572 "superblock": false, 00:12:07.572 "method": "bdev_raid_create", 00:12:07.572 "req_id": 1 00:12:07.572 } 00:12:07.572 Got JSON-RPC error response 00:12:07.572 response: 00:12:07.572 { 00:12:07.572 "code": -17, 00:12:07.572 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:07.572 } 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.572 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.832 [2024-10-09 03:14:50.892691] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:07.832 [2024-10-09 03:14:50.892772] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:07.832 [2024-10-09 03:14:50.892804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:07.832 [2024-10-09 03:14:50.892836] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:07.832 [2024-10-09 03:14:50.895166] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:07.832 [2024-10-09 03:14:50.895234] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:07.832 [2024-10-09 03:14:50.895325] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:07.832 [2024-10-09 03:14:50.895398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:07.832 pt1 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:07.832 "name": "raid_bdev1", 00:12:07.832 "uuid": "5b43edf0-0310-4a90-867d-cefe40e40cfb", 00:12:07.832 "strip_size_kb": 64, 00:12:07.832 "state": "configuring", 00:12:07.832 "raid_level": "concat", 00:12:07.832 "superblock": true, 00:12:07.832 "num_base_bdevs": 4, 00:12:07.832 "num_base_bdevs_discovered": 1, 00:12:07.832 "num_base_bdevs_operational": 4, 00:12:07.832 "base_bdevs_list": [ 00:12:07.832 { 00:12:07.832 "name": "pt1", 00:12:07.832 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:07.832 "is_configured": true, 00:12:07.832 "data_offset": 2048, 00:12:07.832 "data_size": 63488 00:12:07.832 }, 00:12:07.832 { 00:12:07.832 "name": null, 00:12:07.832 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:07.832 "is_configured": false, 00:12:07.832 "data_offset": 2048, 00:12:07.832 "data_size": 63488 00:12:07.832 }, 00:12:07.832 { 00:12:07.832 "name": null, 00:12:07.832 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:07.832 "is_configured": false, 00:12:07.832 "data_offset": 2048, 00:12:07.832 "data_size": 63488 00:12:07.832 }, 00:12:07.832 { 00:12:07.832 "name": null, 00:12:07.832 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:07.832 "is_configured": false, 00:12:07.832 "data_offset": 2048, 00:12:07.832 "data_size": 63488 00:12:07.832 } 00:12:07.832 ] 00:12:07.832 }' 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:07.832 03:14:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.092 [2024-10-09 03:14:51.347929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:08.092 [2024-10-09 03:14:51.348019] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.092 [2024-10-09 03:14:51.348052] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:08.092 [2024-10-09 03:14:51.348079] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.092 [2024-10-09 03:14:51.348474] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.092 [2024-10-09 03:14:51.348529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:08.092 [2024-10-09 03:14:51.348612] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:08.092 [2024-10-09 03:14:51.348659] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:08.092 pt2 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.092 [2024-10-09 03:14:51.359949] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.092 03:14:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.092 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.351 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.351 "name": "raid_bdev1", 00:12:08.351 "uuid": "5b43edf0-0310-4a90-867d-cefe40e40cfb", 00:12:08.351 "strip_size_kb": 64, 00:12:08.351 "state": "configuring", 00:12:08.351 "raid_level": "concat", 00:12:08.351 "superblock": true, 00:12:08.351 "num_base_bdevs": 4, 00:12:08.351 "num_base_bdevs_discovered": 1, 00:12:08.351 "num_base_bdevs_operational": 4, 00:12:08.351 "base_bdevs_list": [ 00:12:08.351 { 00:12:08.351 "name": "pt1", 00:12:08.351 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:08.351 "is_configured": true, 00:12:08.351 "data_offset": 2048, 00:12:08.351 "data_size": 63488 00:12:08.351 }, 00:12:08.351 { 00:12:08.351 "name": null, 00:12:08.351 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.351 "is_configured": false, 00:12:08.351 "data_offset": 0, 00:12:08.351 "data_size": 63488 00:12:08.351 }, 00:12:08.351 { 00:12:08.351 "name": null, 00:12:08.351 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.351 "is_configured": false, 00:12:08.351 "data_offset": 2048, 00:12:08.351 "data_size": 63488 00:12:08.351 }, 00:12:08.351 { 00:12:08.351 "name": null, 00:12:08.351 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.351 "is_configured": false, 00:12:08.351 "data_offset": 2048, 00:12:08.351 "data_size": 63488 00:12:08.351 } 00:12:08.351 ] 00:12:08.351 }' 00:12:08.351 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.351 03:14:51 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:08.610 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:08.610 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:08.610 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:08.610 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.610 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.610 [2024-10-09 03:14:51.803155] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:08.610 [2024-10-09 03:14:51.803247] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.610 [2024-10-09 03:14:51.803284] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:08.610 [2024-10-09 03:14:51.803311] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.610 [2024-10-09 03:14:51.803793] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.610 [2024-10-09 03:14:51.803861] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:08.610 [2024-10-09 03:14:51.803982] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:08.610 [2024-10-09 03:14:51.804031] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:08.610 pt2 00:12:08.610 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.610 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:08.610 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:08.610 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:08.610 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.610 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.610 [2024-10-09 03:14:51.815134] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:08.610 [2024-10-09 03:14:51.815217] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.610 [2024-10-09 03:14:51.815259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:08.610 [2024-10-09 03:14:51.815287] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.611 [2024-10-09 03:14:51.815652] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.611 [2024-10-09 03:14:51.815702] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:08.611 [2024-10-09 03:14:51.815786] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:08.611 [2024-10-09 03:14:51.815827] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:08.611 pt3 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.611 [2024-10-09 03:14:51.827094] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:08.611 [2024-10-09 03:14:51.827173] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:08.611 [2024-10-09 03:14:51.827209] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:08.611 [2024-10-09 03:14:51.827236] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:08.611 [2024-10-09 03:14:51.827610] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:08.611 [2024-10-09 03:14:51.827662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:08.611 [2024-10-09 03:14:51.827748] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:08.611 [2024-10-09 03:14:51.827791] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:08.611 [2024-10-09 03:14:51.827953] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:08.611 [2024-10-09 03:14:51.827992] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:08.611 [2024-10-09 03:14:51.828278] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:08.611 [2024-10-09 03:14:51.828461] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:08.611 [2024-10-09 03:14:51.828509] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:08.611 [2024-10-09 03:14:51.828678] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:08.611 pt4 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:08.611 "name": "raid_bdev1", 00:12:08.611 "uuid": "5b43edf0-0310-4a90-867d-cefe40e40cfb", 00:12:08.611 "strip_size_kb": 64, 00:12:08.611 "state": "online", 00:12:08.611 "raid_level": "concat", 00:12:08.611 
"superblock": true, 00:12:08.611 "num_base_bdevs": 4, 00:12:08.611 "num_base_bdevs_discovered": 4, 00:12:08.611 "num_base_bdevs_operational": 4, 00:12:08.611 "base_bdevs_list": [ 00:12:08.611 { 00:12:08.611 "name": "pt1", 00:12:08.611 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:08.611 "is_configured": true, 00:12:08.611 "data_offset": 2048, 00:12:08.611 "data_size": 63488 00:12:08.611 }, 00:12:08.611 { 00:12:08.611 "name": "pt2", 00:12:08.611 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:08.611 "is_configured": true, 00:12:08.611 "data_offset": 2048, 00:12:08.611 "data_size": 63488 00:12:08.611 }, 00:12:08.611 { 00:12:08.611 "name": "pt3", 00:12:08.611 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:08.611 "is_configured": true, 00:12:08.611 "data_offset": 2048, 00:12:08.611 "data_size": 63488 00:12:08.611 }, 00:12:08.611 { 00:12:08.611 "name": "pt4", 00:12:08.611 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:08.611 "is_configured": true, 00:12:08.611 "data_offset": 2048, 00:12:08.611 "data_size": 63488 00:12:08.611 } 00:12:08.611 ] 00:12:08.611 }' 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:08.611 03:14:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.179 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:09.179 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:09.179 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:09.179 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:09.179 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:09.179 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:09.179 03:14:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:09.179 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:09.179 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.179 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.179 [2024-10-09 03:14:52.314554] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.179 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.179 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:09.179 "name": "raid_bdev1", 00:12:09.179 "aliases": [ 00:12:09.179 "5b43edf0-0310-4a90-867d-cefe40e40cfb" 00:12:09.179 ], 00:12:09.179 "product_name": "Raid Volume", 00:12:09.179 "block_size": 512, 00:12:09.179 "num_blocks": 253952, 00:12:09.179 "uuid": "5b43edf0-0310-4a90-867d-cefe40e40cfb", 00:12:09.179 "assigned_rate_limits": { 00:12:09.179 "rw_ios_per_sec": 0, 00:12:09.179 "rw_mbytes_per_sec": 0, 00:12:09.179 "r_mbytes_per_sec": 0, 00:12:09.179 "w_mbytes_per_sec": 0 00:12:09.179 }, 00:12:09.179 "claimed": false, 00:12:09.179 "zoned": false, 00:12:09.179 "supported_io_types": { 00:12:09.179 "read": true, 00:12:09.179 "write": true, 00:12:09.179 "unmap": true, 00:12:09.179 "flush": true, 00:12:09.179 "reset": true, 00:12:09.179 "nvme_admin": false, 00:12:09.179 "nvme_io": false, 00:12:09.179 "nvme_io_md": false, 00:12:09.179 "write_zeroes": true, 00:12:09.179 "zcopy": false, 00:12:09.179 "get_zone_info": false, 00:12:09.179 "zone_management": false, 00:12:09.179 "zone_append": false, 00:12:09.179 "compare": false, 00:12:09.179 "compare_and_write": false, 00:12:09.179 "abort": false, 00:12:09.179 "seek_hole": false, 00:12:09.179 "seek_data": false, 00:12:09.179 "copy": false, 00:12:09.179 "nvme_iov_md": false 00:12:09.179 }, 00:12:09.179 
"memory_domains": [ 00:12:09.179 { 00:12:09.179 "dma_device_id": "system", 00:12:09.179 "dma_device_type": 1 00:12:09.179 }, 00:12:09.179 { 00:12:09.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.179 "dma_device_type": 2 00:12:09.179 }, 00:12:09.179 { 00:12:09.179 "dma_device_id": "system", 00:12:09.179 "dma_device_type": 1 00:12:09.179 }, 00:12:09.179 { 00:12:09.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.179 "dma_device_type": 2 00:12:09.179 }, 00:12:09.179 { 00:12:09.179 "dma_device_id": "system", 00:12:09.179 "dma_device_type": 1 00:12:09.179 }, 00:12:09.179 { 00:12:09.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.179 "dma_device_type": 2 00:12:09.179 }, 00:12:09.179 { 00:12:09.179 "dma_device_id": "system", 00:12:09.179 "dma_device_type": 1 00:12:09.179 }, 00:12:09.179 { 00:12:09.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.179 "dma_device_type": 2 00:12:09.179 } 00:12:09.179 ], 00:12:09.179 "driver_specific": { 00:12:09.179 "raid": { 00:12:09.179 "uuid": "5b43edf0-0310-4a90-867d-cefe40e40cfb", 00:12:09.179 "strip_size_kb": 64, 00:12:09.179 "state": "online", 00:12:09.179 "raid_level": "concat", 00:12:09.179 "superblock": true, 00:12:09.179 "num_base_bdevs": 4, 00:12:09.179 "num_base_bdevs_discovered": 4, 00:12:09.179 "num_base_bdevs_operational": 4, 00:12:09.179 "base_bdevs_list": [ 00:12:09.179 { 00:12:09.179 "name": "pt1", 00:12:09.179 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:09.179 "is_configured": true, 00:12:09.179 "data_offset": 2048, 00:12:09.180 "data_size": 63488 00:12:09.180 }, 00:12:09.180 { 00:12:09.180 "name": "pt2", 00:12:09.180 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:09.180 "is_configured": true, 00:12:09.180 "data_offset": 2048, 00:12:09.180 "data_size": 63488 00:12:09.180 }, 00:12:09.180 { 00:12:09.180 "name": "pt3", 00:12:09.180 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:09.180 "is_configured": true, 00:12:09.180 "data_offset": 2048, 00:12:09.180 "data_size": 63488 
00:12:09.180 }, 00:12:09.180 { 00:12:09.180 "name": "pt4", 00:12:09.180 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:09.180 "is_configured": true, 00:12:09.180 "data_offset": 2048, 00:12:09.180 "data_size": 63488 00:12:09.180 } 00:12:09.180 ] 00:12:09.180 } 00:12:09.180 } 00:12:09.180 }' 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:09.180 pt2 00:12:09.180 pt3 00:12:09.180 pt4' 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.180 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:09.439 [2024-10-09 03:14:52.610094] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 5b43edf0-0310-4a90-867d-cefe40e40cfb '!=' 5b43edf0-0310-4a90-867d-cefe40e40cfb ']' 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72789 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 72789 ']' 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 72789 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@955 -- # uname 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 72789 00:12:09.439 killing process with pid 72789 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 72789' 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 72789 00:12:09.439 [2024-10-09 03:14:52.683311] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:09.439 [2024-10-09 03:14:52.683414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:09.439 03:14:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 72789 00:12:09.439 [2024-10-09 03:14:52.683501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:09.439 [2024-10-09 03:14:52.683511] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:10.007 [2024-10-09 03:14:53.133405] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:11.385 03:14:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:11.385 00:12:11.385 real 0m6.001s 00:12:11.385 user 0m8.251s 00:12:11.385 sys 0m1.051s 00:12:11.385 03:14:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:11.385 ************************************ 00:12:11.385 END TEST raid_superblock_test 00:12:11.385 ************************************ 00:12:11.385 03:14:54 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.647 03:14:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:12:11.647 03:14:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:11.647 03:14:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:11.647 03:14:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:11.647 ************************************ 00:12:11.647 START TEST raid_read_error_test 00:12:11.647 ************************************ 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 read 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.tFOoNy6wXW 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73055 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w 
randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73055 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 73055 ']' 00:12:11.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:11.647 03:14:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:11.647 [2024-10-09 03:14:54.826543] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:12:11.647 [2024-10-09 03:14:54.826646] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73055 ] 00:12:11.925 [2024-10-09 03:14:54.988270] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:12.199 [2024-10-09 03:14:55.231192] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:12.199 [2024-10-09 03:14:55.465669] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.199 [2024-10-09 03:14:55.465717] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.458 BaseBdev1_malloc 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.458 true 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.458 [2024-10-09 03:14:55.709506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:12.458 [2024-10-09 03:14:55.709615] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.458 [2024-10-09 03:14:55.709652] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:12.458 [2024-10-09 03:14:55.709687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.458 [2024-10-09 03:14:55.712083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.458 [2024-10-09 03:14:55.712157] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:12.458 BaseBdev1 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.458 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.719 BaseBdev2_malloc 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.719 true 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.719 [2024-10-09 03:14:55.799326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:12.719 [2024-10-09 03:14:55.799427] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.719 [2024-10-09 03:14:55.799464] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:12.719 [2024-10-09 03:14:55.799499] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.719 [2024-10-09 03:14:55.801883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.719 [2024-10-09 03:14:55.801955] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:12.719 BaseBdev2 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.719 BaseBdev3_malloc 00:12:12.719 03:14:55 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.719 true 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.719 [2024-10-09 03:14:55.869987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:12.719 [2024-10-09 03:14:55.870094] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.719 [2024-10-09 03:14:55.870132] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:12.719 [2024-10-09 03:14:55.870163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.719 [2024-10-09 03:14:55.872586] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.719 [2024-10-09 03:14:55.872662] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:12.719 BaseBdev3 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.719 BaseBdev4_malloc 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.719 true 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.719 [2024-10-09 03:14:55.947104] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:12.719 [2024-10-09 03:14:55.947234] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:12.719 [2024-10-09 03:14:55.947273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:12.719 [2024-10-09 03:14:55.947324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:12.719 [2024-10-09 03:14:55.950085] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:12.719 [2024-10-09 03:14:55.950180] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:12.719 BaseBdev4 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.719 [2024-10-09 03:14:55.955170] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:12.719 [2024-10-09 03:14:55.957577] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:12.719 [2024-10-09 03:14:55.957710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:12.719 [2024-10-09 03:14:55.957788] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:12.719 [2024-10-09 03:14:55.958040] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:12.719 [2024-10-09 03:14:55.958057] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:12.719 [2024-10-09 03:14:55.958349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:12.719 [2024-10-09 03:14:55.958532] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:12.719 [2024-10-09 03:14:55.958543] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:12.719 [2024-10-09 03:14:55.958727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:12.719 03:14:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:12.719 03:14:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:12.719 03:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:12.719 "name": "raid_bdev1", 00:12:12.719 "uuid": "9e07627d-2e92-408c-9164-c9a69aece7bf", 00:12:12.719 "strip_size_kb": 64, 00:12:12.719 "state": "online", 00:12:12.719 "raid_level": "concat", 00:12:12.719 "superblock": true, 00:12:12.719 "num_base_bdevs": 4, 00:12:12.719 "num_base_bdevs_discovered": 4, 00:12:12.719 "num_base_bdevs_operational": 4, 00:12:12.719 "base_bdevs_list": [ 
00:12:12.719 { 00:12:12.719 "name": "BaseBdev1", 00:12:12.719 "uuid": "c163a34a-bd98-567b-b1f0-8792f6838a45", 00:12:12.719 "is_configured": true, 00:12:12.719 "data_offset": 2048, 00:12:12.719 "data_size": 63488 00:12:12.719 }, 00:12:12.719 { 00:12:12.719 "name": "BaseBdev2", 00:12:12.719 "uuid": "0d94041c-d501-5abb-9736-a7bcd554f5e2", 00:12:12.719 "is_configured": true, 00:12:12.719 "data_offset": 2048, 00:12:12.719 "data_size": 63488 00:12:12.719 }, 00:12:12.720 { 00:12:12.720 "name": "BaseBdev3", 00:12:12.720 "uuid": "7c062b77-442f-59c6-8694-d7105efb10e1", 00:12:12.720 "is_configured": true, 00:12:12.720 "data_offset": 2048, 00:12:12.720 "data_size": 63488 00:12:12.720 }, 00:12:12.720 { 00:12:12.720 "name": "BaseBdev4", 00:12:12.720 "uuid": "96d97af4-6e9b-5c8c-9be8-387d5a2d1000", 00:12:12.720 "is_configured": true, 00:12:12.720 "data_offset": 2048, 00:12:12.720 "data_size": 63488 00:12:12.720 } 00:12:12.720 ] 00:12:12.720 }' 00:12:12.720 03:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:12.720 03:14:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:13.288 03:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:13.288 03:14:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:13.288 [2024-10-09 03:14:56.451792] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.227 03:14:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.227 03:14:57 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.227 "name": "raid_bdev1", 00:12:14.227 "uuid": "9e07627d-2e92-408c-9164-c9a69aece7bf", 00:12:14.227 "strip_size_kb": 64, 00:12:14.227 "state": "online", 00:12:14.227 "raid_level": "concat", 00:12:14.227 "superblock": true, 00:12:14.227 "num_base_bdevs": 4, 00:12:14.227 "num_base_bdevs_discovered": 4, 00:12:14.227 "num_base_bdevs_operational": 4, 00:12:14.227 "base_bdevs_list": [ 00:12:14.227 { 00:12:14.227 "name": "BaseBdev1", 00:12:14.227 "uuid": "c163a34a-bd98-567b-b1f0-8792f6838a45", 00:12:14.227 "is_configured": true, 00:12:14.227 "data_offset": 2048, 00:12:14.227 "data_size": 63488 00:12:14.227 }, 00:12:14.227 { 00:12:14.227 "name": "BaseBdev2", 00:12:14.227 "uuid": "0d94041c-d501-5abb-9736-a7bcd554f5e2", 00:12:14.227 "is_configured": true, 00:12:14.227 "data_offset": 2048, 00:12:14.227 "data_size": 63488 00:12:14.227 }, 00:12:14.227 { 00:12:14.227 "name": "BaseBdev3", 00:12:14.227 "uuid": "7c062b77-442f-59c6-8694-d7105efb10e1", 00:12:14.227 "is_configured": true, 00:12:14.227 "data_offset": 2048, 00:12:14.227 "data_size": 63488 00:12:14.227 }, 00:12:14.227 { 00:12:14.227 "name": "BaseBdev4", 00:12:14.227 "uuid": "96d97af4-6e9b-5c8c-9be8-387d5a2d1000", 00:12:14.227 "is_configured": true, 00:12:14.227 "data_offset": 2048, 00:12:14.227 "data_size": 63488 00:12:14.227 } 00:12:14.227 ] 00:12:14.227 }' 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.227 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.796 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:14.796 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:14.796 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:14.796 [2024-10-09 03:14:57.821945] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:14.796 [2024-10-09 03:14:57.822042] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:14.796 [2024-10-09 03:14:57.824892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:14.796 [2024-10-09 03:14:57.825006] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:14.796 [2024-10-09 03:14:57.825079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:14.796 [2024-10-09 03:14:57.825136] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:14.796 { 00:12:14.796 "results": [ 00:12:14.796 { 00:12:14.796 "job": "raid_bdev1", 00:12:14.796 "core_mask": "0x1", 00:12:14.796 "workload": "randrw", 00:12:14.796 "percentage": 50, 00:12:14.796 "status": "finished", 00:12:14.796 "queue_depth": 1, 00:12:14.796 "io_size": 131072, 00:12:14.796 "runtime": 1.37058, 00:12:14.796 "iops": 12825.227276043719, 00:12:14.796 "mibps": 1603.1534095054649, 00:12:14.796 "io_failed": 1, 00:12:14.796 "io_timeout": 0, 00:12:14.796 "avg_latency_us": 109.61370770155239, 00:12:14.796 "min_latency_us": 29.065502183406114, 00:12:14.796 "max_latency_us": 1502.46288209607 00:12:14.796 } 00:12:14.796 ], 00:12:14.796 "core_count": 1 00:12:14.796 } 00:12:14.796 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:14.796 03:14:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73055 00:12:14.796 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 73055 ']' 00:12:14.796 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 73055 00:12:14.796 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # uname 00:12:14.796 03:14:57 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:14.796 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73055 00:12:14.796 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:14.796 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:14.796 killing process with pid 73055 00:12:14.797 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73055' 00:12:14.797 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 73055 00:12:14.797 [2024-10-09 03:14:57.865251] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:14.797 03:14:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 73055 00:12:15.056 [2024-10-09 03:14:58.238456] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:16.436 03:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:16.436 03:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.tFOoNy6wXW 00:12:16.436 03:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:16.436 03:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.73 00:12:16.436 03:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:16.436 ************************************ 00:12:16.436 END TEST raid_read_error_test 00:12:16.436 ************************************ 00:12:16.436 03:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:16.436 03:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:16.436 03:14:59 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.73 != \0\.\0\0 ]] 00:12:16.436 00:12:16.436 real 0m4.968s 
00:12:16.436 user 0m5.624s 00:12:16.436 sys 0m0.705s 00:12:16.436 03:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:16.436 03:14:59 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.696 03:14:59 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:12:16.696 03:14:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:16.696 03:14:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:16.696 03:14:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:16.696 ************************************ 00:12:16.696 START TEST raid_write_error_test 00:12:16.696 ************************************ 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test concat 4 write 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.7kegNJ3vsv 00:12:16.696 03:14:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73212 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73212 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 73212 ']' 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:16.696 03:14:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:16.696 [2024-10-09 03:14:59.882266] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:12:16.696 [2024-10-09 03:14:59.882511] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73212 ] 00:12:16.955 [2024-10-09 03:15:00.053705] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:17.215 [2024-10-09 03:15:00.306718] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.474 [2024-10-09 03:15:00.547030] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.474 [2024-10-09 03:15:00.547154] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:17.474 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:17.474 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:17.474 03:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.474 03:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:17.474 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.474 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.474 BaseBdev1_malloc 00:12:17.474 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.474 03:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:17.474 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.474 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.474 true 00:12:17.474 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:12:17.474 03:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:17.474 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.474 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.735 [2024-10-09 03:15:00.779190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:17.735 [2024-10-09 03:15:00.779251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.735 [2024-10-09 03:15:00.779269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:17.735 [2024-10-09 03:15:00.779281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.735 [2024-10-09 03:15:00.781643] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.735 [2024-10-09 03:15:00.781680] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:17.735 BaseBdev1 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.735 BaseBdev2_malloc 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:17.735 03:15:00 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.735 true 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.735 [2024-10-09 03:15:00.882202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:17.735 [2024-10-09 03:15:00.882254] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.735 [2024-10-09 03:15:00.882272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:17.735 [2024-10-09 03:15:00.882284] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.735 [2024-10-09 03:15:00.884575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.735 [2024-10-09 03:15:00.884614] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:17.735 BaseBdev2 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:17.735 BaseBdev3_malloc 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.735 true 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.735 [2024-10-09 03:15:00.951187] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:17.735 [2024-10-09 03:15:00.951237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.735 [2024-10-09 03:15:00.951254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:17.735 [2024-10-09 03:15:00.951265] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.735 [2024-10-09 03:15:00.953559] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.735 [2024-10-09 03:15:00.953595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:17.735 BaseBdev3 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.735 03:15:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.735 BaseBdev4_malloc 00:12:17.735 03:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.735 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:17.735 03:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.735 03:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.735 true 00:12:17.735 03:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.735 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:17.735 03:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.735 03:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.735 [2024-10-09 03:15:01.025145] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:17.735 [2024-10-09 03:15:01.025194] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:17.735 [2024-10-09 03:15:01.025211] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:17.735 [2024-10-09 03:15:01.025224] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:17.735 [2024-10-09 03:15:01.027522] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:17.735 [2024-10-09 03:15:01.027558] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:17.735 BaseBdev4 
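Once the four malloc→error→passthru chains above are assembled into raid_bdev1, the suite's `verify_raid_bdev_state` helper pipes `bdev_raid_get_bdevs all` through `jq -r '.[] | select(.name == "raid_bdev1")'` and compares fields against the expected state. A rough Python equivalent of that check (the real helper is shell; the JSON here is a trimmed copy of the blob this test dumps, with timestamps removed):

```python
import json

# Trimmed copy of the bdev_raid_get_bdevs output this test prints,
# keeping only the fields the state check inspects.
raid_bdevs = json.loads("""
[
  {
    "name": "raid_bdev1",
    "uuid": "198f8999-e9d5-43a5-b367-0807cfcceb80",
    "strip_size_kb": 64,
    "state": "online",
    "raid_level": "concat",
    "superblock": true,
    "num_base_bdevs": 4,
    "num_base_bdevs_discovered": 4,
    "num_base_bdevs_operational": 4,
    "base_bdevs_list": [
      {"name": "BaseBdev1", "is_configured": true},
      {"name": "BaseBdev2", "is_configured": true},
      {"name": "BaseBdev3", "is_configured": true},
      {"name": "BaseBdev4", "is_configured": true}
    ]
  }
]
""")

def verify_raid_bdev_state(bdevs, name, state, level, strip_size, operational):
    """Sketch of bdev_raid.sh's verify_raid_bdev_state comparisons."""
    # jq: .[] | select(.name == "raid_bdev1")
    info = next(b for b in bdevs if b["name"] == name)
    assert info["state"] == state
    assert info["raid_level"] == level
    assert info["strip_size_kb"] == strip_size
    assert info["num_base_bdevs_operational"] == operational
    return info

# Matches the invocation: verify_raid_bdev_state raid_bdev1 online concat 64 4
info = verify_raid_bdev_state(raid_bdevs, "raid_bdev1", "online", "concat", 64, 4)
print(info["num_base_bdevs_discovered"])  # 4
```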
00:12:17.735 03:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.736 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:17.736 03:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.736 03:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.736 [2024-10-09 03:15:01.037208] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:17.996 [2024-10-09 03:15:01.039237] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.996 [2024-10-09 03:15:01.039317] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:17.996 [2024-10-09 03:15:01.039380] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:17.996 [2024-10-09 03:15:01.039601] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:17.996 [2024-10-09 03:15:01.039622] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:17.996 [2024-10-09 03:15:01.039878] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:17.996 [2024-10-09 03:15:01.040044] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:17.996 [2024-10-09 03:15:01.040058] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:17.996 [2024-10-09 03:15:01.040205] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.996 "name": "raid_bdev1", 00:12:17.996 "uuid": "198f8999-e9d5-43a5-b367-0807cfcceb80", 00:12:17.996 "strip_size_kb": 64, 00:12:17.996 "state": "online", 00:12:17.996 "raid_level": "concat", 00:12:17.996 "superblock": true, 00:12:17.996 "num_base_bdevs": 4, 00:12:17.996 "num_base_bdevs_discovered": 4, 00:12:17.996 
"num_base_bdevs_operational": 4, 00:12:17.996 "base_bdevs_list": [ 00:12:17.996 { 00:12:17.996 "name": "BaseBdev1", 00:12:17.996 "uuid": "cb126a98-eab0-56cd-8a54-4371ffa37d98", 00:12:17.996 "is_configured": true, 00:12:17.996 "data_offset": 2048, 00:12:17.996 "data_size": 63488 00:12:17.996 }, 00:12:17.996 { 00:12:17.996 "name": "BaseBdev2", 00:12:17.996 "uuid": "8bb86fcc-8559-5412-9c18-8ae974be92fe", 00:12:17.996 "is_configured": true, 00:12:17.996 "data_offset": 2048, 00:12:17.996 "data_size": 63488 00:12:17.996 }, 00:12:17.996 { 00:12:17.996 "name": "BaseBdev3", 00:12:17.996 "uuid": "f98e75ec-ac79-5fc1-a365-93394600ce0b", 00:12:17.996 "is_configured": true, 00:12:17.996 "data_offset": 2048, 00:12:17.996 "data_size": 63488 00:12:17.996 }, 00:12:17.996 { 00:12:17.996 "name": "BaseBdev4", 00:12:17.996 "uuid": "9d335d8b-967b-50a6-ac13-10b138720ed4", 00:12:17.996 "is_configured": true, 00:12:17.996 "data_offset": 2048, 00:12:17.996 "data_size": 63488 00:12:17.996 } 00:12:17.996 ] 00:12:17.996 }' 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.996 03:15:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.256 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:18.256 03:15:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:18.515 [2024-10-09 03:15:01.589795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:19.452 03:15:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.453 03:15:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.453 03:15:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.453 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:19.453 "name": "raid_bdev1", 00:12:19.453 "uuid": "198f8999-e9d5-43a5-b367-0807cfcceb80", 00:12:19.453 "strip_size_kb": 64, 00:12:19.453 "state": "online", 00:12:19.453 "raid_level": "concat", 00:12:19.453 "superblock": true, 00:12:19.453 "num_base_bdevs": 4, 00:12:19.453 "num_base_bdevs_discovered": 4, 00:12:19.453 "num_base_bdevs_operational": 4, 00:12:19.453 "base_bdevs_list": [ 00:12:19.453 { 00:12:19.453 "name": "BaseBdev1", 00:12:19.453 "uuid": "cb126a98-eab0-56cd-8a54-4371ffa37d98", 00:12:19.453 "is_configured": true, 00:12:19.453 "data_offset": 2048, 00:12:19.453 "data_size": 63488 00:12:19.453 }, 00:12:19.453 { 00:12:19.453 "name": "BaseBdev2", 00:12:19.453 "uuid": "8bb86fcc-8559-5412-9c18-8ae974be92fe", 00:12:19.453 "is_configured": true, 00:12:19.453 "data_offset": 2048, 00:12:19.453 "data_size": 63488 00:12:19.453 }, 00:12:19.453 { 00:12:19.453 "name": "BaseBdev3", 00:12:19.453 "uuid": "f98e75ec-ac79-5fc1-a365-93394600ce0b", 00:12:19.453 "is_configured": true, 00:12:19.453 "data_offset": 2048, 00:12:19.453 "data_size": 63488 00:12:19.453 }, 00:12:19.453 { 00:12:19.453 "name": "BaseBdev4", 00:12:19.453 "uuid": "9d335d8b-967b-50a6-ac13-10b138720ed4", 00:12:19.453 "is_configured": true, 00:12:19.453 "data_offset": 2048, 00:12:19.453 "data_size": 63488 00:12:19.453 } 00:12:19.453 ] 00:12:19.453 }' 00:12:19.453 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:19.453 03:15:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:19.712 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:19.712 03:15:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.712 03:15:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:19.712 [2024-10-09 03:15:02.990522] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:19.712 [2024-10-09 03:15:02.990639] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.712 [2024-10-09 03:15:02.993252] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.712 [2024-10-09 03:15:02.993358] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:19.712 [2024-10-09 03:15:02.993440] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.712 [2024-10-09 03:15:02.993487] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:19.712 { 00:12:19.712 "results": [ 00:12:19.712 { 00:12:19.712 "job": "raid_bdev1", 00:12:19.713 "core_mask": "0x1", 00:12:19.713 "workload": "randrw", 00:12:19.713 "percentage": 50, 00:12:19.713 "status": "finished", 00:12:19.713 "queue_depth": 1, 00:12:19.713 "io_size": 131072, 00:12:19.713 "runtime": 1.40148, 00:12:19.713 "iops": 14150.04138482176, 00:12:19.713 "mibps": 1768.75517310272, 00:12:19.713 "io_failed": 1, 00:12:19.713 "io_timeout": 0, 00:12:19.713 "avg_latency_us": 99.63048489407089, 00:12:19.713 "min_latency_us": 25.823580786026202, 00:12:19.713 "max_latency_us": 1359.3711790393013 00:12:19.713 } 00:12:19.713 ], 00:12:19.713 "core_count": 1 00:12:19.713 } 00:12:19.713 03:15:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.713 03:15:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73212 00:12:19.713 03:15:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 73212 ']' 00:12:19.713 03:15:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 73212 00:12:19.713 03:15:02 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@955 -- # uname 00:12:19.713 03:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:19.713 03:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73212 00:12:19.972 03:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:19.972 killing process with pid 73212 00:12:19.972 03:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:19.972 03:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73212' 00:12:19.972 03:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 73212 00:12:19.972 [2024-10-09 03:15:03.042717] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.972 03:15:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 73212 00:12:20.231 [2024-10-09 03:15:03.389420] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.610 03:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:21.610 03:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.7kegNJ3vsv 00:12:21.610 03:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:21.610 03:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:12:21.610 03:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:12:21.610 03:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:21.610 03:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:21.610 03:15:04 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:12:21.610 ************************************ 00:12:21.610 END TEST 
raid_write_error_test 00:12:21.610 ************************************ 00:12:21.610 00:12:21.610 real 0m5.059s 00:12:21.610 user 0m5.756s 00:12:21.610 sys 0m0.761s 00:12:21.610 03:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.610 03:15:04 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.610 03:15:04 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:21.610 03:15:04 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:12:21.610 03:15:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:21.610 03:15:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.610 03:15:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.610 ************************************ 00:12:21.610 START TEST raid_state_function_test 00:12:21.610 ************************************ 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 false 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:21.611 03:15:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:21.611 03:15:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73361 00:12:21.611 Process raid pid: 73361 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73361' 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73361 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 73361 ']' 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:21.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:21.611 03:15:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.871 [2024-10-09 03:15:04.997649] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:12:21.871 [2024-10-09 03:15:04.997871] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.871 [2024-10-09 03:15:05.163876] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:22.129 [2024-10-09 03:15:05.416160] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:22.388 [2024-10-09 03:15:05.661210] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.388 [2024-10-09 03:15:05.661251] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.647 03:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:22.647 03:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:12:22.647 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:22.647 03:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.647 03:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.647 [2024-10-09 03:15:05.864115] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:22.647 [2024-10-09 03:15:05.864222] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:22.647 [2024-10-09 03:15:05.864262] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:22.647 [2024-10-09 03:15:05.864289] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:22.647 [2024-10-09 03:15:05.864326] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:22.647 [2024-10-09 03:15:05.864349] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:22.647 [2024-10-09 03:15:05.864378] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:22.647 [2024-10-09 03:15:05.864401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:22.647 03:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.647 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:22.647 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:22.647 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:22.647 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:22.647 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:22.648 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.648 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:22.648 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.648 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.648 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.648 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.648 03:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.648 03:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:22.648 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:22.648 03:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.648 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.648 "name": "Existed_Raid", 00:12:22.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.648 "strip_size_kb": 0, 00:12:22.648 "state": "configuring", 00:12:22.648 "raid_level": "raid1", 00:12:22.648 "superblock": false, 00:12:22.648 "num_base_bdevs": 4, 00:12:22.648 "num_base_bdevs_discovered": 0, 00:12:22.648 "num_base_bdevs_operational": 4, 00:12:22.648 "base_bdevs_list": [ 00:12:22.648 { 00:12:22.648 "name": "BaseBdev1", 00:12:22.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.648 "is_configured": false, 00:12:22.648 "data_offset": 0, 00:12:22.648 "data_size": 0 00:12:22.648 }, 00:12:22.648 { 00:12:22.648 "name": "BaseBdev2", 00:12:22.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.648 "is_configured": false, 00:12:22.648 "data_offset": 0, 00:12:22.648 "data_size": 0 00:12:22.648 }, 00:12:22.648 { 00:12:22.648 "name": "BaseBdev3", 00:12:22.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.648 "is_configured": false, 00:12:22.648 "data_offset": 0, 00:12:22.648 "data_size": 0 00:12:22.648 }, 00:12:22.648 { 00:12:22.648 "name": "BaseBdev4", 00:12:22.648 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:22.648 "is_configured": false, 00:12:22.648 "data_offset": 0, 00:12:22.648 "data_size": 0 00:12:22.648 } 00:12:22.648 ] 00:12:22.648 }' 00:12:22.648 03:15:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.648 03:15:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.219 [2024-10-09 03:15:06.315306] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.219 [2024-10-09 03:15:06.315380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.219 [2024-10-09 03:15:06.323319] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:23.219 [2024-10-09 03:15:06.323361] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:23.219 [2024-10-09 03:15:06.323370] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:23.219 [2024-10-09 03:15:06.323379] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:23.219 [2024-10-09 03:15:06.323385] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:23.219 [2024-10-09 03:15:06.323393] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:23.219 [2024-10-09 03:15:06.323399] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:23.219 [2024-10-09 03:15:06.323408] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.219 [2024-10-09 03:15:06.409471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.219 BaseBdev1 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.219 [ 00:12:23.219 { 00:12:23.219 "name": "BaseBdev1", 00:12:23.219 "aliases": [ 00:12:23.219 "d3d387cf-e1c6-44fd-869e-8a2ab867f4ce" 00:12:23.219 ], 00:12:23.219 "product_name": "Malloc disk", 00:12:23.219 "block_size": 512, 00:12:23.219 "num_blocks": 65536, 00:12:23.219 "uuid": "d3d387cf-e1c6-44fd-869e-8a2ab867f4ce", 00:12:23.219 "assigned_rate_limits": { 00:12:23.219 "rw_ios_per_sec": 0, 00:12:23.219 "rw_mbytes_per_sec": 0, 00:12:23.219 "r_mbytes_per_sec": 0, 00:12:23.219 "w_mbytes_per_sec": 0 00:12:23.219 }, 00:12:23.219 "claimed": true, 00:12:23.219 "claim_type": "exclusive_write", 00:12:23.219 "zoned": false, 00:12:23.219 "supported_io_types": { 00:12:23.219 "read": true, 00:12:23.219 "write": true, 00:12:23.219 "unmap": true, 00:12:23.219 "flush": true, 00:12:23.219 "reset": true, 00:12:23.219 "nvme_admin": false, 00:12:23.219 "nvme_io": false, 00:12:23.219 "nvme_io_md": false, 00:12:23.219 "write_zeroes": true, 00:12:23.219 "zcopy": true, 00:12:23.219 "get_zone_info": false, 00:12:23.219 "zone_management": false, 00:12:23.219 "zone_append": false, 00:12:23.219 "compare": false, 00:12:23.219 "compare_and_write": false, 00:12:23.219 "abort": true, 00:12:23.219 "seek_hole": false, 00:12:23.219 "seek_data": false, 00:12:23.219 "copy": true, 00:12:23.219 "nvme_iov_md": false 00:12:23.219 }, 00:12:23.219 "memory_domains": [ 00:12:23.219 { 00:12:23.219 "dma_device_id": "system", 00:12:23.219 "dma_device_type": 1 00:12:23.219 }, 00:12:23.219 { 00:12:23.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.219 "dma_device_type": 2 00:12:23.219 } 00:12:23.219 ], 00:12:23.219 "driver_specific": {} 00:12:23.219 } 00:12:23.219 ] 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.219 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.219 "name": "Existed_Raid", 00:12:23.219 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:23.219 "strip_size_kb": 0, 00:12:23.219 "state": "configuring", 00:12:23.219 "raid_level": "raid1", 00:12:23.219 "superblock": false, 00:12:23.219 "num_base_bdevs": 4, 00:12:23.219 "num_base_bdevs_discovered": 1, 00:12:23.219 "num_base_bdevs_operational": 4, 00:12:23.219 "base_bdevs_list": [ 00:12:23.219 { 00:12:23.219 "name": "BaseBdev1", 00:12:23.219 "uuid": "d3d387cf-e1c6-44fd-869e-8a2ab867f4ce", 00:12:23.219 "is_configured": true, 00:12:23.219 "data_offset": 0, 00:12:23.219 "data_size": 65536 00:12:23.219 }, 00:12:23.220 { 00:12:23.220 "name": "BaseBdev2", 00:12:23.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.220 "is_configured": false, 00:12:23.220 "data_offset": 0, 00:12:23.220 "data_size": 0 00:12:23.220 }, 00:12:23.220 { 00:12:23.220 "name": "BaseBdev3", 00:12:23.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.220 "is_configured": false, 00:12:23.220 "data_offset": 0, 00:12:23.220 "data_size": 0 00:12:23.220 }, 00:12:23.220 { 00:12:23.220 "name": "BaseBdev4", 00:12:23.220 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.220 "is_configured": false, 00:12:23.220 "data_offset": 0, 00:12:23.220 "data_size": 0 00:12:23.220 } 00:12:23.220 ] 00:12:23.220 }' 00:12:23.220 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.220 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.790 [2024-10-09 03:15:06.908639] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:23.790 [2024-10-09 03:15:06.908741] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.790 [2024-10-09 03:15:06.920701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:23.790 [2024-10-09 03:15:06.922797] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:23.790 [2024-10-09 03:15:06.922892] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:23.790 [2024-10-09 03:15:06.922931] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:23.790 [2024-10-09 03:15:06.922957] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:23.790 [2024-10-09 03:15:06.922988] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:23.790 [2024-10-09 03:15:06.923010] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:23.790 03:15:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.790 "name": "Existed_Raid", 00:12:23.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.790 "strip_size_kb": 0, 00:12:23.790 "state": "configuring", 00:12:23.790 "raid_level": "raid1", 00:12:23.790 "superblock": false, 00:12:23.790 "num_base_bdevs": 4, 00:12:23.790 "num_base_bdevs_discovered": 1, 00:12:23.790 
"num_base_bdevs_operational": 4, 00:12:23.790 "base_bdevs_list": [ 00:12:23.790 { 00:12:23.790 "name": "BaseBdev1", 00:12:23.790 "uuid": "d3d387cf-e1c6-44fd-869e-8a2ab867f4ce", 00:12:23.790 "is_configured": true, 00:12:23.790 "data_offset": 0, 00:12:23.790 "data_size": 65536 00:12:23.790 }, 00:12:23.790 { 00:12:23.790 "name": "BaseBdev2", 00:12:23.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.790 "is_configured": false, 00:12:23.790 "data_offset": 0, 00:12:23.790 "data_size": 0 00:12:23.790 }, 00:12:23.790 { 00:12:23.790 "name": "BaseBdev3", 00:12:23.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.790 "is_configured": false, 00:12:23.790 "data_offset": 0, 00:12:23.790 "data_size": 0 00:12:23.790 }, 00:12:23.790 { 00:12:23.790 "name": "BaseBdev4", 00:12:23.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:23.790 "is_configured": false, 00:12:23.790 "data_offset": 0, 00:12:23.790 "data_size": 0 00:12:23.790 } 00:12:23.790 ] 00:12:23.790 }' 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.790 03:15:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.359 [2024-10-09 03:15:07.445906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:24.359 BaseBdev2 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev2 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.359 [ 00:12:24.359 { 00:12:24.359 "name": "BaseBdev2", 00:12:24.359 "aliases": [ 00:12:24.359 "a43ba366-bc29-4563-b4e6-e7188034f691" 00:12:24.359 ], 00:12:24.359 "product_name": "Malloc disk", 00:12:24.359 "block_size": 512, 00:12:24.359 "num_blocks": 65536, 00:12:24.359 "uuid": "a43ba366-bc29-4563-b4e6-e7188034f691", 00:12:24.359 "assigned_rate_limits": { 00:12:24.359 "rw_ios_per_sec": 0, 00:12:24.359 "rw_mbytes_per_sec": 0, 00:12:24.359 "r_mbytes_per_sec": 0, 00:12:24.359 "w_mbytes_per_sec": 0 00:12:24.359 }, 00:12:24.359 "claimed": true, 00:12:24.359 "claim_type": "exclusive_write", 00:12:24.359 "zoned": false, 00:12:24.359 "supported_io_types": { 00:12:24.359 "read": true, 00:12:24.359 "write": true, 00:12:24.359 
"unmap": true, 00:12:24.359 "flush": true, 00:12:24.359 "reset": true, 00:12:24.359 "nvme_admin": false, 00:12:24.359 "nvme_io": false, 00:12:24.359 "nvme_io_md": false, 00:12:24.359 "write_zeroes": true, 00:12:24.359 "zcopy": true, 00:12:24.359 "get_zone_info": false, 00:12:24.359 "zone_management": false, 00:12:24.359 "zone_append": false, 00:12:24.359 "compare": false, 00:12:24.359 "compare_and_write": false, 00:12:24.359 "abort": true, 00:12:24.359 "seek_hole": false, 00:12:24.359 "seek_data": false, 00:12:24.359 "copy": true, 00:12:24.359 "nvme_iov_md": false 00:12:24.359 }, 00:12:24.359 "memory_domains": [ 00:12:24.359 { 00:12:24.359 "dma_device_id": "system", 00:12:24.359 "dma_device_type": 1 00:12:24.359 }, 00:12:24.359 { 00:12:24.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.359 "dma_device_type": 2 00:12:24.359 } 00:12:24.359 ], 00:12:24.359 "driver_specific": {} 00:12:24.359 } 00:12:24.359 ] 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.359 03:15:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.359 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.359 "name": "Existed_Raid", 00:12:24.359 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.359 "strip_size_kb": 0, 00:12:24.359 "state": "configuring", 00:12:24.359 "raid_level": "raid1", 00:12:24.359 "superblock": false, 00:12:24.359 "num_base_bdevs": 4, 00:12:24.360 "num_base_bdevs_discovered": 2, 00:12:24.360 "num_base_bdevs_operational": 4, 00:12:24.360 "base_bdevs_list": [ 00:12:24.360 { 00:12:24.360 "name": "BaseBdev1", 00:12:24.360 "uuid": "d3d387cf-e1c6-44fd-869e-8a2ab867f4ce", 00:12:24.360 "is_configured": true, 00:12:24.360 "data_offset": 0, 00:12:24.360 "data_size": 65536 00:12:24.360 }, 00:12:24.360 { 00:12:24.360 "name": "BaseBdev2", 00:12:24.360 "uuid": "a43ba366-bc29-4563-b4e6-e7188034f691", 00:12:24.360 "is_configured": true, 00:12:24.360 
"data_offset": 0, 00:12:24.360 "data_size": 65536 00:12:24.360 }, 00:12:24.360 { 00:12:24.360 "name": "BaseBdev3", 00:12:24.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.360 "is_configured": false, 00:12:24.360 "data_offset": 0, 00:12:24.360 "data_size": 0 00:12:24.360 }, 00:12:24.360 { 00:12:24.360 "name": "BaseBdev4", 00:12:24.360 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.360 "is_configured": false, 00:12:24.360 "data_offset": 0, 00:12:24.360 "data_size": 0 00:12:24.360 } 00:12:24.360 ] 00:12:24.360 }' 00:12:24.360 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.360 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.929 [2024-10-09 03:15:07.982316] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:24.929 BaseBdev3 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # 
bdev_timeout=2000 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.929 03:15:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.929 [ 00:12:24.929 { 00:12:24.929 "name": "BaseBdev3", 00:12:24.929 "aliases": [ 00:12:24.929 "841a218d-951c-4e15-b3ee-37e502924eca" 00:12:24.929 ], 00:12:24.929 "product_name": "Malloc disk", 00:12:24.929 "block_size": 512, 00:12:24.929 "num_blocks": 65536, 00:12:24.929 "uuid": "841a218d-951c-4e15-b3ee-37e502924eca", 00:12:24.929 "assigned_rate_limits": { 00:12:24.929 "rw_ios_per_sec": 0, 00:12:24.929 "rw_mbytes_per_sec": 0, 00:12:24.929 "r_mbytes_per_sec": 0, 00:12:24.929 "w_mbytes_per_sec": 0 00:12:24.929 }, 00:12:24.929 "claimed": true, 00:12:24.929 "claim_type": "exclusive_write", 00:12:24.929 "zoned": false, 00:12:24.929 "supported_io_types": { 00:12:24.929 "read": true, 00:12:24.929 "write": true, 00:12:24.929 "unmap": true, 00:12:24.929 "flush": true, 00:12:24.929 "reset": true, 00:12:24.929 "nvme_admin": false, 00:12:24.929 "nvme_io": false, 00:12:24.929 "nvme_io_md": false, 00:12:24.929 "write_zeroes": true, 00:12:24.929 "zcopy": true, 00:12:24.929 "get_zone_info": false, 00:12:24.929 "zone_management": false, 00:12:24.929 "zone_append": false, 00:12:24.929 "compare": false, 00:12:24.929 "compare_and_write": false, 00:12:24.929 "abort": true, 
00:12:24.929 "seek_hole": false, 00:12:24.929 "seek_data": false, 00:12:24.929 "copy": true, 00:12:24.929 "nvme_iov_md": false 00:12:24.929 }, 00:12:24.929 "memory_domains": [ 00:12:24.929 { 00:12:24.929 "dma_device_id": "system", 00:12:24.929 "dma_device_type": 1 00:12:24.929 }, 00:12:24.929 { 00:12:24.929 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:24.929 "dma_device_type": 2 00:12:24.929 } 00:12:24.929 ], 00:12:24.929 "driver_specific": {} 00:12:24.929 } 00:12:24.929 ] 00:12:24.929 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.929 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:24.929 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:24.929 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:24.929 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:24.929 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:24.929 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.929 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:24.929 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:24.929 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.929 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.930 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.930 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.930 03:15:08 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.930 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.930 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:24.930 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:24.930 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.930 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:24.930 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.930 "name": "Existed_Raid", 00:12:24.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.930 "strip_size_kb": 0, 00:12:24.930 "state": "configuring", 00:12:24.930 "raid_level": "raid1", 00:12:24.930 "superblock": false, 00:12:24.930 "num_base_bdevs": 4, 00:12:24.930 "num_base_bdevs_discovered": 3, 00:12:24.930 "num_base_bdevs_operational": 4, 00:12:24.930 "base_bdevs_list": [ 00:12:24.930 { 00:12:24.930 "name": "BaseBdev1", 00:12:24.930 "uuid": "d3d387cf-e1c6-44fd-869e-8a2ab867f4ce", 00:12:24.930 "is_configured": true, 00:12:24.930 "data_offset": 0, 00:12:24.930 "data_size": 65536 00:12:24.930 }, 00:12:24.930 { 00:12:24.930 "name": "BaseBdev2", 00:12:24.930 "uuid": "a43ba366-bc29-4563-b4e6-e7188034f691", 00:12:24.930 "is_configured": true, 00:12:24.930 "data_offset": 0, 00:12:24.930 "data_size": 65536 00:12:24.930 }, 00:12:24.930 { 00:12:24.930 "name": "BaseBdev3", 00:12:24.930 "uuid": "841a218d-951c-4e15-b3ee-37e502924eca", 00:12:24.930 "is_configured": true, 00:12:24.930 "data_offset": 0, 00:12:24.930 "data_size": 65536 00:12:24.930 }, 00:12:24.930 { 00:12:24.930 "name": "BaseBdev4", 00:12:24.930 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:24.930 "is_configured": false, 00:12:24.930 "data_offset": 
0, 00:12:24.930 "data_size": 0 00:12:24.930 } 00:12:24.930 ] 00:12:24.930 }' 00:12:24.930 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.930 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.189 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:25.189 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.189 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.449 [2024-10-09 03:15:08.535448] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:25.449 [2024-10-09 03:15:08.535598] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:25.449 [2024-10-09 03:15:08.535624] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:25.449 [2024-10-09 03:15:08.535985] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:25.449 [2024-10-09 03:15:08.536216] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:25.449 [2024-10-09 03:15:08.536263] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:25.449 [2024-10-09 03:15:08.536584] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:25.449 BaseBdev4 00:12:25.449 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.449 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:25.449 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:25.449 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local 
bdev_timeout= 00:12:25.449 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:25.449 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:25.449 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:25.449 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:25.449 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.449 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.449 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.449 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:25.449 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.449 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.449 [ 00:12:25.449 { 00:12:25.449 "name": "BaseBdev4", 00:12:25.449 "aliases": [ 00:12:25.449 "0d290a12-47a7-41e3-b8d2-efd709895e94" 00:12:25.449 ], 00:12:25.449 "product_name": "Malloc disk", 00:12:25.449 "block_size": 512, 00:12:25.449 "num_blocks": 65536, 00:12:25.449 "uuid": "0d290a12-47a7-41e3-b8d2-efd709895e94", 00:12:25.449 "assigned_rate_limits": { 00:12:25.449 "rw_ios_per_sec": 0, 00:12:25.449 "rw_mbytes_per_sec": 0, 00:12:25.449 "r_mbytes_per_sec": 0, 00:12:25.449 "w_mbytes_per_sec": 0 00:12:25.449 }, 00:12:25.449 "claimed": true, 00:12:25.449 "claim_type": "exclusive_write", 00:12:25.449 "zoned": false, 00:12:25.449 "supported_io_types": { 00:12:25.449 "read": true, 00:12:25.449 "write": true, 00:12:25.449 "unmap": true, 00:12:25.449 "flush": true, 00:12:25.449 "reset": true, 00:12:25.449 "nvme_admin": false, 00:12:25.449 "nvme_io": 
false, 00:12:25.449 "nvme_io_md": false, 00:12:25.449 "write_zeroes": true, 00:12:25.449 "zcopy": true, 00:12:25.449 "get_zone_info": false, 00:12:25.449 "zone_management": false, 00:12:25.449 "zone_append": false, 00:12:25.449 "compare": false, 00:12:25.450 "compare_and_write": false, 00:12:25.450 "abort": true, 00:12:25.450 "seek_hole": false, 00:12:25.450 "seek_data": false, 00:12:25.450 "copy": true, 00:12:25.450 "nvme_iov_md": false 00:12:25.450 }, 00:12:25.450 "memory_domains": [ 00:12:25.450 { 00:12:25.450 "dma_device_id": "system", 00:12:25.450 "dma_device_type": 1 00:12:25.450 }, 00:12:25.450 { 00:12:25.450 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.450 "dma_device_type": 2 00:12:25.450 } 00:12:25.450 ], 00:12:25.450 "driver_specific": {} 00:12:25.450 } 00:12:25.450 ] 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:25.450 "name": "Existed_Raid", 00:12:25.450 "uuid": "2dc3e7ac-b670-4bd1-9b87-c79bd41abe72", 00:12:25.450 "strip_size_kb": 0, 00:12:25.450 "state": "online", 00:12:25.450 "raid_level": "raid1", 00:12:25.450 "superblock": false, 00:12:25.450 "num_base_bdevs": 4, 00:12:25.450 "num_base_bdevs_discovered": 4, 00:12:25.450 "num_base_bdevs_operational": 4, 00:12:25.450 "base_bdevs_list": [ 00:12:25.450 { 00:12:25.450 "name": "BaseBdev1", 00:12:25.450 "uuid": "d3d387cf-e1c6-44fd-869e-8a2ab867f4ce", 00:12:25.450 "is_configured": true, 00:12:25.450 "data_offset": 0, 00:12:25.450 "data_size": 65536 00:12:25.450 }, 00:12:25.450 { 00:12:25.450 "name": "BaseBdev2", 00:12:25.450 "uuid": "a43ba366-bc29-4563-b4e6-e7188034f691", 00:12:25.450 "is_configured": true, 00:12:25.450 "data_offset": 0, 00:12:25.450 "data_size": 65536 00:12:25.450 }, 00:12:25.450 { 00:12:25.450 "name": "BaseBdev3", 00:12:25.450 "uuid": "841a218d-951c-4e15-b3ee-37e502924eca", 
00:12:25.450 "is_configured": true, 00:12:25.450 "data_offset": 0, 00:12:25.450 "data_size": 65536 00:12:25.450 }, 00:12:25.450 { 00:12:25.450 "name": "BaseBdev4", 00:12:25.450 "uuid": "0d290a12-47a7-41e3-b8d2-efd709895e94", 00:12:25.450 "is_configured": true, 00:12:25.450 "data_offset": 0, 00:12:25.450 "data_size": 65536 00:12:25.450 } 00:12:25.450 ] 00:12:25.450 }' 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:25.450 03:15:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.020 [2024-10-09 03:15:09.039018] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:26.020 "name": "Existed_Raid", 00:12:26.020 "aliases": [ 00:12:26.020 "2dc3e7ac-b670-4bd1-9b87-c79bd41abe72" 00:12:26.020 ], 00:12:26.020 "product_name": "Raid Volume", 00:12:26.020 "block_size": 512, 00:12:26.020 "num_blocks": 65536, 00:12:26.020 "uuid": "2dc3e7ac-b670-4bd1-9b87-c79bd41abe72", 00:12:26.020 "assigned_rate_limits": { 00:12:26.020 "rw_ios_per_sec": 0, 00:12:26.020 "rw_mbytes_per_sec": 0, 00:12:26.020 "r_mbytes_per_sec": 0, 00:12:26.020 "w_mbytes_per_sec": 0 00:12:26.020 }, 00:12:26.020 "claimed": false, 00:12:26.020 "zoned": false, 00:12:26.020 "supported_io_types": { 00:12:26.020 "read": true, 00:12:26.020 "write": true, 00:12:26.020 "unmap": false, 00:12:26.020 "flush": false, 00:12:26.020 "reset": true, 00:12:26.020 "nvme_admin": false, 00:12:26.020 "nvme_io": false, 00:12:26.020 "nvme_io_md": false, 00:12:26.020 "write_zeroes": true, 00:12:26.020 "zcopy": false, 00:12:26.020 "get_zone_info": false, 00:12:26.020 "zone_management": false, 00:12:26.020 "zone_append": false, 00:12:26.020 "compare": false, 00:12:26.020 "compare_and_write": false, 00:12:26.020 "abort": false, 00:12:26.020 "seek_hole": false, 00:12:26.020 "seek_data": false, 00:12:26.020 "copy": false, 00:12:26.020 "nvme_iov_md": false 00:12:26.020 }, 00:12:26.020 "memory_domains": [ 00:12:26.020 { 00:12:26.020 "dma_device_id": "system", 00:12:26.020 "dma_device_type": 1 00:12:26.020 }, 00:12:26.020 { 00:12:26.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.020 "dma_device_type": 2 00:12:26.020 }, 00:12:26.020 { 00:12:26.020 "dma_device_id": "system", 00:12:26.020 "dma_device_type": 1 00:12:26.020 }, 00:12:26.020 { 00:12:26.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.020 "dma_device_type": 2 00:12:26.020 }, 00:12:26.020 { 00:12:26.020 "dma_device_id": "system", 00:12:26.020 "dma_device_type": 1 00:12:26.020 }, 00:12:26.020 { 00:12:26.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.020 "dma_device_type": 2 
00:12:26.020 }, 00:12:26.020 { 00:12:26.020 "dma_device_id": "system", 00:12:26.020 "dma_device_type": 1 00:12:26.020 }, 00:12:26.020 { 00:12:26.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:26.020 "dma_device_type": 2 00:12:26.020 } 00:12:26.020 ], 00:12:26.020 "driver_specific": { 00:12:26.020 "raid": { 00:12:26.020 "uuid": "2dc3e7ac-b670-4bd1-9b87-c79bd41abe72", 00:12:26.020 "strip_size_kb": 0, 00:12:26.020 "state": "online", 00:12:26.020 "raid_level": "raid1", 00:12:26.020 "superblock": false, 00:12:26.020 "num_base_bdevs": 4, 00:12:26.020 "num_base_bdevs_discovered": 4, 00:12:26.020 "num_base_bdevs_operational": 4, 00:12:26.020 "base_bdevs_list": [ 00:12:26.020 { 00:12:26.020 "name": "BaseBdev1", 00:12:26.020 "uuid": "d3d387cf-e1c6-44fd-869e-8a2ab867f4ce", 00:12:26.020 "is_configured": true, 00:12:26.020 "data_offset": 0, 00:12:26.020 "data_size": 65536 00:12:26.020 }, 00:12:26.020 { 00:12:26.020 "name": "BaseBdev2", 00:12:26.020 "uuid": "a43ba366-bc29-4563-b4e6-e7188034f691", 00:12:26.020 "is_configured": true, 00:12:26.020 "data_offset": 0, 00:12:26.020 "data_size": 65536 00:12:26.020 }, 00:12:26.020 { 00:12:26.020 "name": "BaseBdev3", 00:12:26.020 "uuid": "841a218d-951c-4e15-b3ee-37e502924eca", 00:12:26.020 "is_configured": true, 00:12:26.020 "data_offset": 0, 00:12:26.020 "data_size": 65536 00:12:26.020 }, 00:12:26.020 { 00:12:26.020 "name": "BaseBdev4", 00:12:26.020 "uuid": "0d290a12-47a7-41e3-b8d2-efd709895e94", 00:12:26.020 "is_configured": true, 00:12:26.020 "data_offset": 0, 00:12:26.020 "data_size": 65536 00:12:26.020 } 00:12:26.020 ] 00:12:26.020 } 00:12:26.020 } 00:12:26.020 }' 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:26.020 BaseBdev2 00:12:26.020 BaseBdev3 00:12:26.020 BaseBdev4' 00:12:26.020 
03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:26.020 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.280 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:26.280 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:12:26.280 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:26.280 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.280 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.280 [2024-10-09 03:15:09.338162] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:26.280 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.280 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:26.281 "name": "Existed_Raid", 00:12:26.281 "uuid": "2dc3e7ac-b670-4bd1-9b87-c79bd41abe72", 00:12:26.281 "strip_size_kb": 0, 00:12:26.281 "state": "online", 00:12:26.281 "raid_level": "raid1", 00:12:26.281 "superblock": false, 00:12:26.281 "num_base_bdevs": 4, 00:12:26.281 "num_base_bdevs_discovered": 3, 00:12:26.281 "num_base_bdevs_operational": 3, 00:12:26.281 "base_bdevs_list": [ 00:12:26.281 { 00:12:26.281 "name": null, 00:12:26.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:26.281 "is_configured": false, 00:12:26.281 "data_offset": 0, 00:12:26.281 "data_size": 65536 00:12:26.281 }, 00:12:26.281 { 00:12:26.281 "name": "BaseBdev2", 00:12:26.281 "uuid": "a43ba366-bc29-4563-b4e6-e7188034f691", 00:12:26.281 "is_configured": true, 00:12:26.281 "data_offset": 0, 00:12:26.281 "data_size": 65536 00:12:26.281 }, 00:12:26.281 { 00:12:26.281 "name": "BaseBdev3", 00:12:26.281 "uuid": "841a218d-951c-4e15-b3ee-37e502924eca", 00:12:26.281 "is_configured": true, 00:12:26.281 "data_offset": 0, 00:12:26.281 "data_size": 65536 00:12:26.281 }, 00:12:26.281 { 
00:12:26.281 "name": "BaseBdev4", 00:12:26.281 "uuid": "0d290a12-47a7-41e3-b8d2-efd709895e94", 00:12:26.281 "is_configured": true, 00:12:26.281 "data_offset": 0, 00:12:26.281 "data_size": 65536 00:12:26.281 } 00:12:26.281 ] 00:12:26.281 }' 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:26.281 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.850 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:26.850 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:26.850 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:26.850 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.850 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.850 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.850 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.850 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:26.850 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:26.850 03:15:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:26.850 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.850 03:15:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.850 [2024-10-09 03:15:09.934370] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:26.851 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.851 
03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:26.851 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:26.851 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:26.851 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:26.851 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.851 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.851 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:26.851 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:26.851 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:26.851 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:26.851 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:26.851 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.851 [2024-10-09 03:15:10.095113] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:27.110 03:15:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.110 [2024-10-09 03:15:10.258438] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:27.110 [2024-10-09 03:15:10.258595] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:27.110 [2024-10-09 03:15:10.358760] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:27.110 [2024-10-09 03:15:10.358913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:27.110 [2024-10-09 03:15:10.358934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.110 03:15:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.110 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.369 BaseBdev2 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:27.369 03:15:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.369 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.369 [ 00:12:27.369 { 00:12:27.369 "name": "BaseBdev2", 00:12:27.369 "aliases": [ 00:12:27.370 "028cda0d-9296-4804-b144-4ef1b2cb1d7f" 00:12:27.370 ], 00:12:27.370 "product_name": "Malloc disk", 00:12:27.370 "block_size": 512, 00:12:27.370 "num_blocks": 65536, 00:12:27.370 "uuid": "028cda0d-9296-4804-b144-4ef1b2cb1d7f", 00:12:27.370 "assigned_rate_limits": { 00:12:27.370 "rw_ios_per_sec": 0, 00:12:27.370 "rw_mbytes_per_sec": 0, 00:12:27.370 "r_mbytes_per_sec": 0, 00:12:27.370 "w_mbytes_per_sec": 0 00:12:27.370 }, 00:12:27.370 "claimed": false, 00:12:27.370 "zoned": false, 00:12:27.370 "supported_io_types": { 00:12:27.370 "read": true, 00:12:27.370 "write": true, 00:12:27.370 "unmap": true, 00:12:27.370 "flush": true, 00:12:27.370 "reset": true, 00:12:27.370 "nvme_admin": false, 00:12:27.370 "nvme_io": false, 00:12:27.370 "nvme_io_md": false, 00:12:27.370 "write_zeroes": true, 00:12:27.370 "zcopy": true, 00:12:27.370 "get_zone_info": false, 00:12:27.370 "zone_management": false, 00:12:27.370 "zone_append": false, 00:12:27.370 "compare": false, 00:12:27.370 "compare_and_write": false, 
00:12:27.370 "abort": true, 00:12:27.370 "seek_hole": false, 00:12:27.370 "seek_data": false, 00:12:27.370 "copy": true, 00:12:27.370 "nvme_iov_md": false 00:12:27.370 }, 00:12:27.370 "memory_domains": [ 00:12:27.370 { 00:12:27.370 "dma_device_id": "system", 00:12:27.370 "dma_device_type": 1 00:12:27.370 }, 00:12:27.370 { 00:12:27.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.370 "dma_device_type": 2 00:12:27.370 } 00:12:27.370 ], 00:12:27.370 "driver_specific": {} 00:12:27.370 } 00:12:27.370 ] 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.370 BaseBdev3 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:27.370 03:15:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.370 [ 00:12:27.370 { 00:12:27.370 "name": "BaseBdev3", 00:12:27.370 "aliases": [ 00:12:27.370 "106c7c94-baaf-4483-8b6a-4196bf33bc8d" 00:12:27.370 ], 00:12:27.370 "product_name": "Malloc disk", 00:12:27.370 "block_size": 512, 00:12:27.370 "num_blocks": 65536, 00:12:27.370 "uuid": "106c7c94-baaf-4483-8b6a-4196bf33bc8d", 00:12:27.370 "assigned_rate_limits": { 00:12:27.370 "rw_ios_per_sec": 0, 00:12:27.370 "rw_mbytes_per_sec": 0, 00:12:27.370 "r_mbytes_per_sec": 0, 00:12:27.370 "w_mbytes_per_sec": 0 00:12:27.370 }, 00:12:27.370 "claimed": false, 00:12:27.370 "zoned": false, 00:12:27.370 "supported_io_types": { 00:12:27.370 "read": true, 00:12:27.370 "write": true, 00:12:27.370 "unmap": true, 00:12:27.370 "flush": true, 00:12:27.370 "reset": true, 00:12:27.370 "nvme_admin": false, 00:12:27.370 "nvme_io": false, 00:12:27.370 "nvme_io_md": false, 00:12:27.370 "write_zeroes": true, 00:12:27.370 "zcopy": true, 00:12:27.370 "get_zone_info": false, 00:12:27.370 "zone_management": false, 00:12:27.370 "zone_append": false, 00:12:27.370 "compare": false, 00:12:27.370 "compare_and_write": false, 
00:12:27.370 "abort": true, 00:12:27.370 "seek_hole": false, 00:12:27.370 "seek_data": false, 00:12:27.370 "copy": true, 00:12:27.370 "nvme_iov_md": false 00:12:27.370 }, 00:12:27.370 "memory_domains": [ 00:12:27.370 { 00:12:27.370 "dma_device_id": "system", 00:12:27.370 "dma_device_type": 1 00:12:27.370 }, 00:12:27.370 { 00:12:27.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.370 "dma_device_type": 2 00:12:27.370 } 00:12:27.370 ], 00:12:27.370 "driver_specific": {} 00:12:27.370 } 00:12:27.370 ] 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.370 BaseBdev4 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:27.370 03:15:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.370 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.370 [ 00:12:27.370 { 00:12:27.370 "name": "BaseBdev4", 00:12:27.370 "aliases": [ 00:12:27.370 "d40c4ce2-7fdd-4d1f-bd77-f9a723958b8e" 00:12:27.370 ], 00:12:27.370 "product_name": "Malloc disk", 00:12:27.370 "block_size": 512, 00:12:27.370 "num_blocks": 65536, 00:12:27.370 "uuid": "d40c4ce2-7fdd-4d1f-bd77-f9a723958b8e", 00:12:27.370 "assigned_rate_limits": { 00:12:27.370 "rw_ios_per_sec": 0, 00:12:27.370 "rw_mbytes_per_sec": 0, 00:12:27.370 "r_mbytes_per_sec": 0, 00:12:27.370 "w_mbytes_per_sec": 0 00:12:27.370 }, 00:12:27.370 "claimed": false, 00:12:27.370 "zoned": false, 00:12:27.370 "supported_io_types": { 00:12:27.370 "read": true, 00:12:27.370 "write": true, 00:12:27.370 "unmap": true, 00:12:27.370 "flush": true, 00:12:27.370 "reset": true, 00:12:27.370 "nvme_admin": false, 00:12:27.370 "nvme_io": false, 00:12:27.370 "nvme_io_md": false, 00:12:27.370 "write_zeroes": true, 00:12:27.370 "zcopy": true, 00:12:27.370 "get_zone_info": false, 00:12:27.370 "zone_management": false, 00:12:27.370 "zone_append": false, 00:12:27.370 "compare": false, 00:12:27.370 "compare_and_write": false, 
00:12:27.370 "abort": true, 00:12:27.370 "seek_hole": false, 00:12:27.370 "seek_data": false, 00:12:27.370 "copy": true, 00:12:27.370 "nvme_iov_md": false 00:12:27.370 }, 00:12:27.370 "memory_domains": [ 00:12:27.370 { 00:12:27.370 "dma_device_id": "system", 00:12:27.370 "dma_device_type": 1 00:12:27.370 }, 00:12:27.370 { 00:12:27.370 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:27.370 "dma_device_type": 2 00:12:27.370 } 00:12:27.371 ], 00:12:27.371 "driver_specific": {} 00:12:27.371 } 00:12:27.371 ] 00:12:27.371 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.371 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:12:27.371 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:27.371 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:27.371 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:27.371 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.371 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.371 [2024-10-09 03:15:10.668726] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:27.371 [2024-10-09 03:15:10.668827] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:27.371 [2024-10-09 03:15:10.668878] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:27.371 [2024-10-09 03:15:10.670865] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:27.371 [2024-10-09 03:15:10.670950] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:27.630 03:15:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.630 "name": "Existed_Raid", 00:12:27.630 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:12:27.630 "strip_size_kb": 0, 00:12:27.630 "state": "configuring", 00:12:27.630 "raid_level": "raid1", 00:12:27.630 "superblock": false, 00:12:27.630 "num_base_bdevs": 4, 00:12:27.630 "num_base_bdevs_discovered": 3, 00:12:27.630 "num_base_bdevs_operational": 4, 00:12:27.630 "base_bdevs_list": [ 00:12:27.630 { 00:12:27.630 "name": "BaseBdev1", 00:12:27.630 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.630 "is_configured": false, 00:12:27.630 "data_offset": 0, 00:12:27.630 "data_size": 0 00:12:27.630 }, 00:12:27.630 { 00:12:27.630 "name": "BaseBdev2", 00:12:27.630 "uuid": "028cda0d-9296-4804-b144-4ef1b2cb1d7f", 00:12:27.630 "is_configured": true, 00:12:27.630 "data_offset": 0, 00:12:27.630 "data_size": 65536 00:12:27.630 }, 00:12:27.630 { 00:12:27.630 "name": "BaseBdev3", 00:12:27.630 "uuid": "106c7c94-baaf-4483-8b6a-4196bf33bc8d", 00:12:27.630 "is_configured": true, 00:12:27.630 "data_offset": 0, 00:12:27.630 "data_size": 65536 00:12:27.630 }, 00:12:27.630 { 00:12:27.630 "name": "BaseBdev4", 00:12:27.630 "uuid": "d40c4ce2-7fdd-4d1f-bd77-f9a723958b8e", 00:12:27.630 "is_configured": true, 00:12:27.630 "data_offset": 0, 00:12:27.630 "data_size": 65536 00:12:27.630 } 00:12:27.630 ] 00:12:27.630 }' 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.630 03:15:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.890 [2024-10-09 03:15:11.120134] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:27.890 "name": "Existed_Raid", 00:12:27.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.890 
"strip_size_kb": 0, 00:12:27.890 "state": "configuring", 00:12:27.890 "raid_level": "raid1", 00:12:27.890 "superblock": false, 00:12:27.890 "num_base_bdevs": 4, 00:12:27.890 "num_base_bdevs_discovered": 2, 00:12:27.890 "num_base_bdevs_operational": 4, 00:12:27.890 "base_bdevs_list": [ 00:12:27.890 { 00:12:27.890 "name": "BaseBdev1", 00:12:27.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:27.890 "is_configured": false, 00:12:27.890 "data_offset": 0, 00:12:27.890 "data_size": 0 00:12:27.890 }, 00:12:27.890 { 00:12:27.890 "name": null, 00:12:27.890 "uuid": "028cda0d-9296-4804-b144-4ef1b2cb1d7f", 00:12:27.890 "is_configured": false, 00:12:27.890 "data_offset": 0, 00:12:27.890 "data_size": 65536 00:12:27.890 }, 00:12:27.890 { 00:12:27.890 "name": "BaseBdev3", 00:12:27.890 "uuid": "106c7c94-baaf-4483-8b6a-4196bf33bc8d", 00:12:27.890 "is_configured": true, 00:12:27.890 "data_offset": 0, 00:12:27.890 "data_size": 65536 00:12:27.890 }, 00:12:27.890 { 00:12:27.890 "name": "BaseBdev4", 00:12:27.890 "uuid": "d40c4ce2-7fdd-4d1f-bd77-f9a723958b8e", 00:12:27.890 "is_configured": true, 00:12:27.890 "data_offset": 0, 00:12:27.890 "data_size": 65536 00:12:27.890 } 00:12:27.890 ] 00:12:27.890 }' 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:27.890 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.460 03:15:11 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.460 [2024-10-09 03:15:11.616435] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:28.460 BaseBdev1 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.460 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.460 [ 00:12:28.460 { 00:12:28.460 "name": "BaseBdev1", 00:12:28.460 "aliases": [ 00:12:28.460 "572a8f9b-1531-4318-9a62-340dbc9a8daf" 00:12:28.460 ], 00:12:28.460 "product_name": "Malloc disk", 00:12:28.460 "block_size": 512, 00:12:28.460 "num_blocks": 65536, 00:12:28.460 "uuid": "572a8f9b-1531-4318-9a62-340dbc9a8daf", 00:12:28.460 "assigned_rate_limits": { 00:12:28.460 "rw_ios_per_sec": 0, 00:12:28.460 "rw_mbytes_per_sec": 0, 00:12:28.460 "r_mbytes_per_sec": 0, 00:12:28.460 "w_mbytes_per_sec": 0 00:12:28.460 }, 00:12:28.460 "claimed": true, 00:12:28.460 "claim_type": "exclusive_write", 00:12:28.460 "zoned": false, 00:12:28.460 "supported_io_types": { 00:12:28.460 "read": true, 00:12:28.460 "write": true, 00:12:28.460 "unmap": true, 00:12:28.460 "flush": true, 00:12:28.460 "reset": true, 00:12:28.460 "nvme_admin": false, 00:12:28.460 "nvme_io": false, 00:12:28.460 "nvme_io_md": false, 00:12:28.461 "write_zeroes": true, 00:12:28.461 "zcopy": true, 00:12:28.461 "get_zone_info": false, 00:12:28.461 "zone_management": false, 00:12:28.461 "zone_append": false, 00:12:28.461 "compare": false, 00:12:28.461 "compare_and_write": false, 00:12:28.461 "abort": true, 00:12:28.461 "seek_hole": false, 00:12:28.461 "seek_data": false, 00:12:28.461 "copy": true, 00:12:28.461 "nvme_iov_md": false 00:12:28.461 }, 00:12:28.461 "memory_domains": [ 00:12:28.461 { 00:12:28.461 "dma_device_id": "system", 00:12:28.461 "dma_device_type": 1 00:12:28.461 }, 00:12:28.461 { 00:12:28.461 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.461 "dma_device_type": 2 00:12:28.461 } 00:12:28.461 ], 00:12:28.461 "driver_specific": {} 00:12:28.461 } 00:12:28.461 ] 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@907 -- # return 0 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.461 "name": "Existed_Raid", 00:12:28.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:28.461 
"strip_size_kb": 0, 00:12:28.461 "state": "configuring", 00:12:28.461 "raid_level": "raid1", 00:12:28.461 "superblock": false, 00:12:28.461 "num_base_bdevs": 4, 00:12:28.461 "num_base_bdevs_discovered": 3, 00:12:28.461 "num_base_bdevs_operational": 4, 00:12:28.461 "base_bdevs_list": [ 00:12:28.461 { 00:12:28.461 "name": "BaseBdev1", 00:12:28.461 "uuid": "572a8f9b-1531-4318-9a62-340dbc9a8daf", 00:12:28.461 "is_configured": true, 00:12:28.461 "data_offset": 0, 00:12:28.461 "data_size": 65536 00:12:28.461 }, 00:12:28.461 { 00:12:28.461 "name": null, 00:12:28.461 "uuid": "028cda0d-9296-4804-b144-4ef1b2cb1d7f", 00:12:28.461 "is_configured": false, 00:12:28.461 "data_offset": 0, 00:12:28.461 "data_size": 65536 00:12:28.461 }, 00:12:28.461 { 00:12:28.461 "name": "BaseBdev3", 00:12:28.461 "uuid": "106c7c94-baaf-4483-8b6a-4196bf33bc8d", 00:12:28.461 "is_configured": true, 00:12:28.461 "data_offset": 0, 00:12:28.461 "data_size": 65536 00:12:28.461 }, 00:12:28.461 { 00:12:28.461 "name": "BaseBdev4", 00:12:28.461 "uuid": "d40c4ce2-7fdd-4d1f-bd77-f9a723958b8e", 00:12:28.461 "is_configured": true, 00:12:28.461 "data_offset": 0, 00:12:28.461 "data_size": 65536 00:12:28.461 } 00:12:28.461 ] 00:12:28.461 }' 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.461 03:15:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.031 
03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.031 [2024-10-09 03:15:12.127655] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.031 03:15:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.031 "name": "Existed_Raid", 00:12:29.031 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.031 "strip_size_kb": 0, 00:12:29.031 "state": "configuring", 00:12:29.031 "raid_level": "raid1", 00:12:29.031 "superblock": false, 00:12:29.031 "num_base_bdevs": 4, 00:12:29.031 "num_base_bdevs_discovered": 2, 00:12:29.031 "num_base_bdevs_operational": 4, 00:12:29.031 "base_bdevs_list": [ 00:12:29.031 { 00:12:29.031 "name": "BaseBdev1", 00:12:29.031 "uuid": "572a8f9b-1531-4318-9a62-340dbc9a8daf", 00:12:29.031 "is_configured": true, 00:12:29.031 "data_offset": 0, 00:12:29.031 "data_size": 65536 00:12:29.031 }, 00:12:29.031 { 00:12:29.031 "name": null, 00:12:29.031 "uuid": "028cda0d-9296-4804-b144-4ef1b2cb1d7f", 00:12:29.031 "is_configured": false, 00:12:29.031 "data_offset": 0, 00:12:29.031 "data_size": 65536 00:12:29.031 }, 00:12:29.031 { 00:12:29.031 "name": null, 00:12:29.031 "uuid": "106c7c94-baaf-4483-8b6a-4196bf33bc8d", 00:12:29.031 "is_configured": false, 00:12:29.031 "data_offset": 0, 00:12:29.031 "data_size": 65536 00:12:29.031 }, 00:12:29.031 { 00:12:29.031 "name": "BaseBdev4", 00:12:29.031 "uuid": "d40c4ce2-7fdd-4d1f-bd77-f9a723958b8e", 00:12:29.031 "is_configured": true, 00:12:29.031 "data_offset": 0, 00:12:29.031 "data_size": 65536 00:12:29.031 } 00:12:29.031 ] 00:12:29.031 }' 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.031 03:15:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.291 [2024-10-09 03:15:12.578939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.291 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:29.550 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.550 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.550 "name": "Existed_Raid", 00:12:29.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:29.550 "strip_size_kb": 0, 00:12:29.550 "state": "configuring", 00:12:29.550 "raid_level": "raid1", 00:12:29.551 "superblock": false, 00:12:29.551 "num_base_bdevs": 4, 00:12:29.551 "num_base_bdevs_discovered": 3, 00:12:29.551 "num_base_bdevs_operational": 4, 00:12:29.551 "base_bdevs_list": [ 00:12:29.551 { 00:12:29.551 "name": "BaseBdev1", 00:12:29.551 "uuid": "572a8f9b-1531-4318-9a62-340dbc9a8daf", 00:12:29.551 "is_configured": true, 00:12:29.551 "data_offset": 0, 00:12:29.551 "data_size": 65536 00:12:29.551 }, 00:12:29.551 { 00:12:29.551 "name": null, 00:12:29.551 "uuid": "028cda0d-9296-4804-b144-4ef1b2cb1d7f", 00:12:29.551 "is_configured": false, 00:12:29.551 "data_offset": 0, 00:12:29.551 "data_size": 65536 00:12:29.551 }, 00:12:29.551 { 
00:12:29.551 "name": "BaseBdev3", 00:12:29.551 "uuid": "106c7c94-baaf-4483-8b6a-4196bf33bc8d", 00:12:29.551 "is_configured": true, 00:12:29.551 "data_offset": 0, 00:12:29.551 "data_size": 65536 00:12:29.551 }, 00:12:29.551 { 00:12:29.551 "name": "BaseBdev4", 00:12:29.551 "uuid": "d40c4ce2-7fdd-4d1f-bd77-f9a723958b8e", 00:12:29.551 "is_configured": true, 00:12:29.551 "data_offset": 0, 00:12:29.551 "data_size": 65536 00:12:29.551 } 00:12:29.551 ] 00:12:29.551 }' 00:12:29.551 03:15:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.551 03:15:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.810 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:29.810 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.810 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.810 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.810 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:29.810 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:29.810 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:29.810 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:29.810 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.810 [2024-10-09 03:15:13.046211] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.082 "name": "Existed_Raid", 00:12:30.082 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.082 "strip_size_kb": 0, 00:12:30.082 "state": "configuring", 00:12:30.082 "raid_level": "raid1", 00:12:30.082 "superblock": false, 00:12:30.082 
"num_base_bdevs": 4, 00:12:30.082 "num_base_bdevs_discovered": 2, 00:12:30.082 "num_base_bdevs_operational": 4, 00:12:30.082 "base_bdevs_list": [ 00:12:30.082 { 00:12:30.082 "name": null, 00:12:30.082 "uuid": "572a8f9b-1531-4318-9a62-340dbc9a8daf", 00:12:30.082 "is_configured": false, 00:12:30.082 "data_offset": 0, 00:12:30.082 "data_size": 65536 00:12:30.082 }, 00:12:30.082 { 00:12:30.082 "name": null, 00:12:30.082 "uuid": "028cda0d-9296-4804-b144-4ef1b2cb1d7f", 00:12:30.082 "is_configured": false, 00:12:30.082 "data_offset": 0, 00:12:30.082 "data_size": 65536 00:12:30.082 }, 00:12:30.082 { 00:12:30.082 "name": "BaseBdev3", 00:12:30.082 "uuid": "106c7c94-baaf-4483-8b6a-4196bf33bc8d", 00:12:30.082 "is_configured": true, 00:12:30.082 "data_offset": 0, 00:12:30.082 "data_size": 65536 00:12:30.082 }, 00:12:30.082 { 00:12:30.082 "name": "BaseBdev4", 00:12:30.082 "uuid": "d40c4ce2-7fdd-4d1f-bd77-f9a723958b8e", 00:12:30.082 "is_configured": true, 00:12:30.082 "data_offset": 0, 00:12:30.082 "data_size": 65536 00:12:30.082 } 00:12:30.082 ] 00:12:30.082 }' 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.082 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:30.357 03:15:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.357 [2024-10-09 03:15:13.609119] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.357 03:15:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:30.357 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.617 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:30.617 "name": "Existed_Raid", 00:12:30.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.617 "strip_size_kb": 0, 00:12:30.617 "state": "configuring", 00:12:30.617 "raid_level": "raid1", 00:12:30.617 "superblock": false, 00:12:30.617 "num_base_bdevs": 4, 00:12:30.617 "num_base_bdevs_discovered": 3, 00:12:30.617 "num_base_bdevs_operational": 4, 00:12:30.617 "base_bdevs_list": [ 00:12:30.617 { 00:12:30.617 "name": null, 00:12:30.617 "uuid": "572a8f9b-1531-4318-9a62-340dbc9a8daf", 00:12:30.617 "is_configured": false, 00:12:30.617 "data_offset": 0, 00:12:30.617 "data_size": 65536 00:12:30.617 }, 00:12:30.617 { 00:12:30.617 "name": "BaseBdev2", 00:12:30.617 "uuid": "028cda0d-9296-4804-b144-4ef1b2cb1d7f", 00:12:30.617 "is_configured": true, 00:12:30.617 "data_offset": 0, 00:12:30.617 "data_size": 65536 00:12:30.617 }, 00:12:30.617 { 00:12:30.617 "name": "BaseBdev3", 00:12:30.617 "uuid": "106c7c94-baaf-4483-8b6a-4196bf33bc8d", 00:12:30.617 "is_configured": true, 00:12:30.617 "data_offset": 0, 00:12:30.617 "data_size": 65536 00:12:30.617 }, 00:12:30.617 { 00:12:30.617 "name": "BaseBdev4", 00:12:30.617 "uuid": "d40c4ce2-7fdd-4d1f-bd77-f9a723958b8e", 00:12:30.617 "is_configured": true, 00:12:30.617 "data_offset": 0, 00:12:30.617 "data_size": 65536 00:12:30.617 } 00:12:30.617 ] 00:12:30.617 }' 00:12:30.617 03:15:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:30.618 03:15:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.877 03:15:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.877 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.877 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.877 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:30.877 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.877 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:30.877 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:30.877 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.877 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.877 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:30.877 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:30.877 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 572a8f9b-1531-4318-9a62-340dbc9a8daf 00:12:30.877 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:30.877 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.137 [2024-10-09 03:15:14.189784] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:31.137 [2024-10-09 03:15:14.189864] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:31.137 [2024-10-09 03:15:14.189881] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:12:31.137 
[2024-10-09 03:15:14.190261] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:31.137 [2024-10-09 03:15:14.190484] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:31.137 [2024-10-09 03:15:14.190502] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:31.137 [2024-10-09 03:15:14.190800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:31.137 NewBaseBdev 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@901 -- # local i 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.137 [ 00:12:31.137 { 00:12:31.137 "name": "NewBaseBdev", 00:12:31.137 "aliases": [ 00:12:31.137 "572a8f9b-1531-4318-9a62-340dbc9a8daf" 00:12:31.137 ], 00:12:31.137 "product_name": "Malloc disk", 00:12:31.137 "block_size": 512, 00:12:31.137 "num_blocks": 65536, 00:12:31.137 "uuid": "572a8f9b-1531-4318-9a62-340dbc9a8daf", 00:12:31.137 "assigned_rate_limits": { 00:12:31.137 "rw_ios_per_sec": 0, 00:12:31.137 "rw_mbytes_per_sec": 0, 00:12:31.137 "r_mbytes_per_sec": 0, 00:12:31.137 "w_mbytes_per_sec": 0 00:12:31.137 }, 00:12:31.137 "claimed": true, 00:12:31.137 "claim_type": "exclusive_write", 00:12:31.137 "zoned": false, 00:12:31.137 "supported_io_types": { 00:12:31.137 "read": true, 00:12:31.137 "write": true, 00:12:31.137 "unmap": true, 00:12:31.137 "flush": true, 00:12:31.137 "reset": true, 00:12:31.137 "nvme_admin": false, 00:12:31.137 "nvme_io": false, 00:12:31.137 "nvme_io_md": false, 00:12:31.137 "write_zeroes": true, 00:12:31.137 "zcopy": true, 00:12:31.137 "get_zone_info": false, 00:12:31.137 "zone_management": false, 00:12:31.137 "zone_append": false, 00:12:31.137 "compare": false, 00:12:31.137 "compare_and_write": false, 00:12:31.137 "abort": true, 00:12:31.137 "seek_hole": false, 00:12:31.137 "seek_data": false, 00:12:31.137 "copy": true, 00:12:31.137 "nvme_iov_md": false 00:12:31.137 }, 00:12:31.137 "memory_domains": [ 00:12:31.137 { 00:12:31.137 "dma_device_id": "system", 00:12:31.137 "dma_device_type": 1 00:12:31.137 }, 00:12:31.137 { 00:12:31.137 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.137 "dma_device_type": 2 00:12:31.137 } 00:12:31.137 ], 00:12:31.137 "driver_specific": {} 00:12:31.137 } 00:12:31.137 ] 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@907 -- # return 0 
00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.137 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:31.137 "name": "Existed_Raid", 00:12:31.137 "uuid": "2918542c-338e-4664-9e63-0c79e21e632c", 00:12:31.137 "strip_size_kb": 0, 00:12:31.137 "state": "online", 00:12:31.137 
"raid_level": "raid1", 00:12:31.137 "superblock": false, 00:12:31.137 "num_base_bdevs": 4, 00:12:31.137 "num_base_bdevs_discovered": 4, 00:12:31.138 "num_base_bdevs_operational": 4, 00:12:31.138 "base_bdevs_list": [ 00:12:31.138 { 00:12:31.138 "name": "NewBaseBdev", 00:12:31.138 "uuid": "572a8f9b-1531-4318-9a62-340dbc9a8daf", 00:12:31.138 "is_configured": true, 00:12:31.138 "data_offset": 0, 00:12:31.138 "data_size": 65536 00:12:31.138 }, 00:12:31.138 { 00:12:31.138 "name": "BaseBdev2", 00:12:31.138 "uuid": "028cda0d-9296-4804-b144-4ef1b2cb1d7f", 00:12:31.138 "is_configured": true, 00:12:31.138 "data_offset": 0, 00:12:31.138 "data_size": 65536 00:12:31.138 }, 00:12:31.138 { 00:12:31.138 "name": "BaseBdev3", 00:12:31.138 "uuid": "106c7c94-baaf-4483-8b6a-4196bf33bc8d", 00:12:31.138 "is_configured": true, 00:12:31.138 "data_offset": 0, 00:12:31.138 "data_size": 65536 00:12:31.138 }, 00:12:31.138 { 00:12:31.138 "name": "BaseBdev4", 00:12:31.138 "uuid": "d40c4ce2-7fdd-4d1f-bd77-f9a723958b8e", 00:12:31.138 "is_configured": true, 00:12:31.138 "data_offset": 0, 00:12:31.138 "data_size": 65536 00:12:31.138 } 00:12:31.138 ] 00:12:31.138 }' 00:12:31.138 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:31.138 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.398 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:31.398 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:31.398 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:31.398 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:31.398 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:31.398 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:12:31.398 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:31.398 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:31.398 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.398 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.398 [2024-10-09 03:15:14.641487] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:31.398 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.398 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:31.398 "name": "Existed_Raid", 00:12:31.398 "aliases": [ 00:12:31.398 "2918542c-338e-4664-9e63-0c79e21e632c" 00:12:31.398 ], 00:12:31.398 "product_name": "Raid Volume", 00:12:31.398 "block_size": 512, 00:12:31.398 "num_blocks": 65536, 00:12:31.398 "uuid": "2918542c-338e-4664-9e63-0c79e21e632c", 00:12:31.398 "assigned_rate_limits": { 00:12:31.398 "rw_ios_per_sec": 0, 00:12:31.398 "rw_mbytes_per_sec": 0, 00:12:31.398 "r_mbytes_per_sec": 0, 00:12:31.398 "w_mbytes_per_sec": 0 00:12:31.398 }, 00:12:31.398 "claimed": false, 00:12:31.398 "zoned": false, 00:12:31.398 "supported_io_types": { 00:12:31.398 "read": true, 00:12:31.398 "write": true, 00:12:31.398 "unmap": false, 00:12:31.398 "flush": false, 00:12:31.398 "reset": true, 00:12:31.398 "nvme_admin": false, 00:12:31.398 "nvme_io": false, 00:12:31.398 "nvme_io_md": false, 00:12:31.398 "write_zeroes": true, 00:12:31.398 "zcopy": false, 00:12:31.398 "get_zone_info": false, 00:12:31.398 "zone_management": false, 00:12:31.398 "zone_append": false, 00:12:31.398 "compare": false, 00:12:31.398 "compare_and_write": false, 00:12:31.398 "abort": false, 00:12:31.398 "seek_hole": false, 00:12:31.398 "seek_data": false, 00:12:31.398 
"copy": false, 00:12:31.398 "nvme_iov_md": false 00:12:31.398 }, 00:12:31.398 "memory_domains": [ 00:12:31.398 { 00:12:31.398 "dma_device_id": "system", 00:12:31.398 "dma_device_type": 1 00:12:31.398 }, 00:12:31.398 { 00:12:31.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.398 "dma_device_type": 2 00:12:31.398 }, 00:12:31.398 { 00:12:31.398 "dma_device_id": "system", 00:12:31.398 "dma_device_type": 1 00:12:31.398 }, 00:12:31.398 { 00:12:31.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.398 "dma_device_type": 2 00:12:31.398 }, 00:12:31.398 { 00:12:31.398 "dma_device_id": "system", 00:12:31.398 "dma_device_type": 1 00:12:31.398 }, 00:12:31.398 { 00:12:31.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.398 "dma_device_type": 2 00:12:31.398 }, 00:12:31.398 { 00:12:31.398 "dma_device_id": "system", 00:12:31.398 "dma_device_type": 1 00:12:31.398 }, 00:12:31.398 { 00:12:31.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:31.398 "dma_device_type": 2 00:12:31.398 } 00:12:31.398 ], 00:12:31.398 "driver_specific": { 00:12:31.398 "raid": { 00:12:31.398 "uuid": "2918542c-338e-4664-9e63-0c79e21e632c", 00:12:31.398 "strip_size_kb": 0, 00:12:31.398 "state": "online", 00:12:31.398 "raid_level": "raid1", 00:12:31.398 "superblock": false, 00:12:31.398 "num_base_bdevs": 4, 00:12:31.398 "num_base_bdevs_discovered": 4, 00:12:31.398 "num_base_bdevs_operational": 4, 00:12:31.398 "base_bdevs_list": [ 00:12:31.398 { 00:12:31.398 "name": "NewBaseBdev", 00:12:31.398 "uuid": "572a8f9b-1531-4318-9a62-340dbc9a8daf", 00:12:31.398 "is_configured": true, 00:12:31.398 "data_offset": 0, 00:12:31.398 "data_size": 65536 00:12:31.398 }, 00:12:31.398 { 00:12:31.398 "name": "BaseBdev2", 00:12:31.398 "uuid": "028cda0d-9296-4804-b144-4ef1b2cb1d7f", 00:12:31.398 "is_configured": true, 00:12:31.398 "data_offset": 0, 00:12:31.398 "data_size": 65536 00:12:31.398 }, 00:12:31.398 { 00:12:31.398 "name": "BaseBdev3", 00:12:31.398 "uuid": "106c7c94-baaf-4483-8b6a-4196bf33bc8d", 00:12:31.398 
"is_configured": true, 00:12:31.398 "data_offset": 0, 00:12:31.398 "data_size": 65536 00:12:31.398 }, 00:12:31.398 { 00:12:31.398 "name": "BaseBdev4", 00:12:31.398 "uuid": "d40c4ce2-7fdd-4d1f-bd77-f9a723958b8e", 00:12:31.398 "is_configured": true, 00:12:31.398 "data_offset": 0, 00:12:31.398 "data_size": 65536 00:12:31.398 } 00:12:31.398 ] 00:12:31.398 } 00:12:31.398 } 00:12:31.398 }' 00:12:31.398 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:31.658 BaseBdev2 00:12:31.658 BaseBdev3 00:12:31.658 BaseBdev4' 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.658 03:15:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.658 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:31.659 03:15:14 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:31.659 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.659 [2024-10-09 03:15:14.956574] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:31.659 [2024-10-09 03:15:14.956613] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:31.659 [2024-10-09 03:15:14.956712] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:31.659 [2024-10-09 03:15:14.957066] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:31.659 [2024-10-09 03:15:14.957097] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:31.919 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:31.919 03:15:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73361 00:12:31.919 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 73361 ']' 00:12:31.919 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # kill -0 73361 00:12:31.919 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # uname 00:12:31.919 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:31.919 03:15:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 73361 00:12:31.919 killing process with pid 73361 00:12:31.919 03:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:31.919 03:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:31.919 03:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 73361' 00:12:31.919 03:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@969 -- # kill 73361 00:12:31.919 [2024-10-09 03:15:15.005281] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:31.919 03:15:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@974 -- # wait 73361 00:12:32.179 [2024-10-09 03:15:15.447418] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:34.088 ************************************ 00:12:34.088 END TEST raid_state_function_test 00:12:34.088 ************************************ 00:12:34.088 03:15:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:34.088 00:12:34.088 real 0m12.067s 00:12:34.088 user 0m18.595s 00:12:34.088 sys 0m2.266s 00:12:34.088 03:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:34.088 03:15:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:34.088 03:15:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:12:34.088 03:15:17 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:34.088 03:15:17 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:34.088 03:15:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:34.088 ************************************ 00:12:34.088 START TEST raid_state_function_test_sb 00:12:34.088 ************************************ 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 4 true 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:34.088 
03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:34.088 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=74038 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 74038' 00:12:34.089 Process raid pid: 74038 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 74038 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 74038 ']' 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:34.089 03:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.089 [2024-10-09 03:15:17.134803] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:12:34.089 [2024-10-09 03:15:17.134927] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.089 [2024-10-09 03:15:17.282924] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:34.348 [2024-10-09 03:15:17.561071] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.607 [2024-10-09 03:15:17.860605] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.607 [2024-10-09 03:15:17.860662] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.867 [2024-10-09 03:15:18.063217] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:34.867 [2024-10-09 03:15:18.063276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:34.867 [2024-10-09 03:15:18.063294] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:34.867 [2024-10-09 03:15:18.063306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:34.867 [2024-10-09 03:15:18.063313] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:12:34.867 [2024-10-09 03:15:18.063323] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:34.867 [2024-10-09 03:15:18.063330] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:34.867 [2024-10-09 03:15:18.063342] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:34.867 03:15:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.867 "name": "Existed_Raid", 00:12:34.867 "uuid": "93201b63-d27f-45a8-881a-04f6739bca65", 00:12:34.867 "strip_size_kb": 0, 00:12:34.867 "state": "configuring", 00:12:34.867 "raid_level": "raid1", 00:12:34.867 "superblock": true, 00:12:34.867 "num_base_bdevs": 4, 00:12:34.867 "num_base_bdevs_discovered": 0, 00:12:34.867 "num_base_bdevs_operational": 4, 00:12:34.867 "base_bdevs_list": [ 00:12:34.867 { 00:12:34.867 "name": "BaseBdev1", 00:12:34.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.867 "is_configured": false, 00:12:34.867 "data_offset": 0, 00:12:34.867 "data_size": 0 00:12:34.867 }, 00:12:34.867 { 00:12:34.867 "name": "BaseBdev2", 00:12:34.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.867 "is_configured": false, 00:12:34.867 "data_offset": 0, 00:12:34.867 "data_size": 0 00:12:34.867 }, 00:12:34.867 { 00:12:34.867 "name": "BaseBdev3", 00:12:34.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.867 "is_configured": false, 00:12:34.867 "data_offset": 0, 00:12:34.867 "data_size": 0 00:12:34.867 }, 00:12:34.867 { 00:12:34.867 "name": "BaseBdev4", 00:12:34.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.867 "is_configured": false, 00:12:34.867 "data_offset": 0, 00:12:34.867 "data_size": 0 00:12:34.867 } 00:12:34.867 ] 00:12:34.867 }' 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.867 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.435 03:15:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.435 [2024-10-09 03:15:18.510332] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:35.435 [2024-10-09 03:15:18.510384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.435 [2024-10-09 03:15:18.522348] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:35.435 [2024-10-09 03:15:18.522394] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:35.435 [2024-10-09 03:15:18.522404] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:35.435 [2024-10-09 03:15:18.522415] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:35.435 [2024-10-09 03:15:18.522421] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:35.435 [2024-10-09 03:15:18.522431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:35.435 [2024-10-09 03:15:18.522438] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:12:35.435 [2024-10-09 03:15:18.522448] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.435 [2024-10-09 03:15:18.589170] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:35.435 BaseBdev1 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.435 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.435 [ 00:12:35.435 { 00:12:35.435 "name": "BaseBdev1", 00:12:35.435 "aliases": [ 00:12:35.435 "762655f2-c693-429d-b572-d7c32c735dd3" 00:12:35.435 ], 00:12:35.435 "product_name": "Malloc disk", 00:12:35.435 "block_size": 512, 00:12:35.435 "num_blocks": 65536, 00:12:35.435 "uuid": "762655f2-c693-429d-b572-d7c32c735dd3", 00:12:35.435 "assigned_rate_limits": { 00:12:35.435 "rw_ios_per_sec": 0, 00:12:35.435 "rw_mbytes_per_sec": 0, 00:12:35.435 "r_mbytes_per_sec": 0, 00:12:35.435 "w_mbytes_per_sec": 0 00:12:35.435 }, 00:12:35.435 "claimed": true, 00:12:35.435 "claim_type": "exclusive_write", 00:12:35.435 "zoned": false, 00:12:35.435 "supported_io_types": { 00:12:35.435 "read": true, 00:12:35.435 "write": true, 00:12:35.435 "unmap": true, 00:12:35.435 "flush": true, 00:12:35.435 "reset": true, 00:12:35.435 "nvme_admin": false, 00:12:35.435 "nvme_io": false, 00:12:35.436 "nvme_io_md": false, 00:12:35.436 "write_zeroes": true, 00:12:35.436 "zcopy": true, 00:12:35.436 "get_zone_info": false, 00:12:35.436 "zone_management": false, 00:12:35.436 "zone_append": false, 00:12:35.436 "compare": false, 00:12:35.436 "compare_and_write": false, 00:12:35.436 "abort": true, 00:12:35.436 "seek_hole": false, 00:12:35.436 "seek_data": false, 00:12:35.436 "copy": true, 00:12:35.436 "nvme_iov_md": false 00:12:35.436 }, 00:12:35.436 "memory_domains": [ 00:12:35.436 { 00:12:35.436 "dma_device_id": "system", 00:12:35.436 "dma_device_type": 1 00:12:35.436 }, 00:12:35.436 { 00:12:35.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:35.436 "dma_device_type": 2 00:12:35.436 } 00:12:35.436 
], 00:12:35.436 "driver_specific": {} 00:12:35.436 } 00:12:35.436 ] 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:35.436 03:15:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:35.436 "name": "Existed_Raid", 00:12:35.436 "uuid": "7267e2b7-0a2d-4c80-be4a-da95fd2d6a02", 00:12:35.436 "strip_size_kb": 0, 00:12:35.436 "state": "configuring", 00:12:35.436 "raid_level": "raid1", 00:12:35.436 "superblock": true, 00:12:35.436 "num_base_bdevs": 4, 00:12:35.436 "num_base_bdevs_discovered": 1, 00:12:35.436 "num_base_bdevs_operational": 4, 00:12:35.436 "base_bdevs_list": [ 00:12:35.436 { 00:12:35.436 "name": "BaseBdev1", 00:12:35.436 "uuid": "762655f2-c693-429d-b572-d7c32c735dd3", 00:12:35.436 "is_configured": true, 00:12:35.436 "data_offset": 2048, 00:12:35.436 "data_size": 63488 00:12:35.436 }, 00:12:35.436 { 00:12:35.436 "name": "BaseBdev2", 00:12:35.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.436 "is_configured": false, 00:12:35.436 "data_offset": 0, 00:12:35.436 "data_size": 0 00:12:35.436 }, 00:12:35.436 { 00:12:35.436 "name": "BaseBdev3", 00:12:35.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.436 "is_configured": false, 00:12:35.436 "data_offset": 0, 00:12:35.436 "data_size": 0 00:12:35.436 }, 00:12:35.436 { 00:12:35.436 "name": "BaseBdev4", 00:12:35.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.436 "is_configured": false, 00:12:35.436 "data_offset": 0, 00:12:35.436 "data_size": 0 00:12:35.436 } 00:12:35.436 ] 00:12:35.436 }' 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:35.436 03:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.012 03:15:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.012 [2024-10-09 03:15:19.140310] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:36.012 [2024-10-09 03:15:19.140380] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.012 [2024-10-09 03:15:19.152320] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:36.012 [2024-10-09 03:15:19.154897] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:36.012 [2024-10-09 03:15:19.154946] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:36.012 [2024-10-09 03:15:19.154957] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:36.012 [2024-10-09 03:15:19.154969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:36.012 [2024-10-09 03:15:19.154976] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:36.012 [2024-10-09 03:15:19.154984] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.012 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.013 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.013 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.013 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.013 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.013 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.013 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.013 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.013 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:12:36.013 "name": "Existed_Raid", 00:12:36.013 "uuid": "3464a843-3254-4272-9ca0-22034beec747", 00:12:36.013 "strip_size_kb": 0, 00:12:36.013 "state": "configuring", 00:12:36.013 "raid_level": "raid1", 00:12:36.013 "superblock": true, 00:12:36.013 "num_base_bdevs": 4, 00:12:36.013 "num_base_bdevs_discovered": 1, 00:12:36.013 "num_base_bdevs_operational": 4, 00:12:36.013 "base_bdevs_list": [ 00:12:36.013 { 00:12:36.013 "name": "BaseBdev1", 00:12:36.013 "uuid": "762655f2-c693-429d-b572-d7c32c735dd3", 00:12:36.013 "is_configured": true, 00:12:36.013 "data_offset": 2048, 00:12:36.013 "data_size": 63488 00:12:36.013 }, 00:12:36.013 { 00:12:36.013 "name": "BaseBdev2", 00:12:36.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.013 "is_configured": false, 00:12:36.013 "data_offset": 0, 00:12:36.013 "data_size": 0 00:12:36.013 }, 00:12:36.013 { 00:12:36.013 "name": "BaseBdev3", 00:12:36.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.013 "is_configured": false, 00:12:36.013 "data_offset": 0, 00:12:36.013 "data_size": 0 00:12:36.013 }, 00:12:36.013 { 00:12:36.013 "name": "BaseBdev4", 00:12:36.013 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.013 "is_configured": false, 00:12:36.013 "data_offset": 0, 00:12:36.013 "data_size": 0 00:12:36.013 } 00:12:36.013 ] 00:12:36.013 }' 00:12:36.013 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.013 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.598 [2024-10-09 03:15:19.651998] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:12:36.598 BaseBdev2 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.598 [ 00:12:36.598 { 00:12:36.598 "name": "BaseBdev2", 00:12:36.598 "aliases": [ 00:12:36.598 "004f47bb-760e-4181-9728-87144cf69019" 00:12:36.598 ], 00:12:36.598 "product_name": "Malloc disk", 00:12:36.598 "block_size": 512, 00:12:36.598 "num_blocks": 65536, 00:12:36.598 "uuid": "004f47bb-760e-4181-9728-87144cf69019", 00:12:36.598 
"assigned_rate_limits": { 00:12:36.598 "rw_ios_per_sec": 0, 00:12:36.598 "rw_mbytes_per_sec": 0, 00:12:36.598 "r_mbytes_per_sec": 0, 00:12:36.598 "w_mbytes_per_sec": 0 00:12:36.598 }, 00:12:36.598 "claimed": true, 00:12:36.598 "claim_type": "exclusive_write", 00:12:36.598 "zoned": false, 00:12:36.598 "supported_io_types": { 00:12:36.598 "read": true, 00:12:36.598 "write": true, 00:12:36.598 "unmap": true, 00:12:36.598 "flush": true, 00:12:36.598 "reset": true, 00:12:36.598 "nvme_admin": false, 00:12:36.598 "nvme_io": false, 00:12:36.598 "nvme_io_md": false, 00:12:36.598 "write_zeroes": true, 00:12:36.598 "zcopy": true, 00:12:36.598 "get_zone_info": false, 00:12:36.598 "zone_management": false, 00:12:36.598 "zone_append": false, 00:12:36.598 "compare": false, 00:12:36.598 "compare_and_write": false, 00:12:36.598 "abort": true, 00:12:36.598 "seek_hole": false, 00:12:36.598 "seek_data": false, 00:12:36.598 "copy": true, 00:12:36.598 "nvme_iov_md": false 00:12:36.598 }, 00:12:36.598 "memory_domains": [ 00:12:36.598 { 00:12:36.598 "dma_device_id": "system", 00:12:36.598 "dma_device_type": 1 00:12:36.598 }, 00:12:36.598 { 00:12:36.598 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:36.598 "dma_device_type": 2 00:12:36.598 } 00:12:36.598 ], 00:12:36.598 "driver_specific": {} 00:12:36.598 } 00:12:36.598 ] 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:36.598 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:36.599 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:36.599 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:36.599 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:36.599 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:36.599 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.599 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.599 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:36.599 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:36.599 "name": "Existed_Raid", 00:12:36.599 "uuid": "3464a843-3254-4272-9ca0-22034beec747", 00:12:36.599 "strip_size_kb": 0, 00:12:36.599 "state": "configuring", 00:12:36.599 "raid_level": "raid1", 00:12:36.599 "superblock": true, 00:12:36.599 "num_base_bdevs": 4, 00:12:36.599 "num_base_bdevs_discovered": 2, 00:12:36.599 "num_base_bdevs_operational": 4, 
00:12:36.599 "base_bdevs_list": [ 00:12:36.599 { 00:12:36.599 "name": "BaseBdev1", 00:12:36.599 "uuid": "762655f2-c693-429d-b572-d7c32c735dd3", 00:12:36.599 "is_configured": true, 00:12:36.599 "data_offset": 2048, 00:12:36.599 "data_size": 63488 00:12:36.599 }, 00:12:36.599 { 00:12:36.599 "name": "BaseBdev2", 00:12:36.599 "uuid": "004f47bb-760e-4181-9728-87144cf69019", 00:12:36.599 "is_configured": true, 00:12:36.599 "data_offset": 2048, 00:12:36.599 "data_size": 63488 00:12:36.599 }, 00:12:36.599 { 00:12:36.599 "name": "BaseBdev3", 00:12:36.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.599 "is_configured": false, 00:12:36.599 "data_offset": 0, 00:12:36.599 "data_size": 0 00:12:36.599 }, 00:12:36.599 { 00:12:36.599 "name": "BaseBdev4", 00:12:36.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:36.599 "is_configured": false, 00:12:36.599 "data_offset": 0, 00:12:36.599 "data_size": 0 00:12:36.599 } 00:12:36.599 ] 00:12:36.599 }' 00:12:36.599 03:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:36.599 03:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:36.858 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:36.858 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:36.858 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.118 [2024-10-09 03:15:20.197028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:37.118 BaseBdev3 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # 
local bdev_name=BaseBdev3 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.118 [ 00:12:37.118 { 00:12:37.118 "name": "BaseBdev3", 00:12:37.118 "aliases": [ 00:12:37.118 "921a14d3-391e-4793-8289-e4c595b4f0bb" 00:12:37.118 ], 00:12:37.118 "product_name": "Malloc disk", 00:12:37.118 "block_size": 512, 00:12:37.118 "num_blocks": 65536, 00:12:37.118 "uuid": "921a14d3-391e-4793-8289-e4c595b4f0bb", 00:12:37.118 "assigned_rate_limits": { 00:12:37.118 "rw_ios_per_sec": 0, 00:12:37.118 "rw_mbytes_per_sec": 0, 00:12:37.118 "r_mbytes_per_sec": 0, 00:12:37.118 "w_mbytes_per_sec": 0 00:12:37.118 }, 00:12:37.118 "claimed": true, 00:12:37.118 "claim_type": "exclusive_write", 00:12:37.118 "zoned": false, 00:12:37.118 "supported_io_types": { 00:12:37.118 "read": true, 00:12:37.118 
"write": true, 00:12:37.118 "unmap": true, 00:12:37.118 "flush": true, 00:12:37.118 "reset": true, 00:12:37.118 "nvme_admin": false, 00:12:37.118 "nvme_io": false, 00:12:37.118 "nvme_io_md": false, 00:12:37.118 "write_zeroes": true, 00:12:37.118 "zcopy": true, 00:12:37.118 "get_zone_info": false, 00:12:37.118 "zone_management": false, 00:12:37.118 "zone_append": false, 00:12:37.118 "compare": false, 00:12:37.118 "compare_and_write": false, 00:12:37.118 "abort": true, 00:12:37.118 "seek_hole": false, 00:12:37.118 "seek_data": false, 00:12:37.118 "copy": true, 00:12:37.118 "nvme_iov_md": false 00:12:37.118 }, 00:12:37.118 "memory_domains": [ 00:12:37.118 { 00:12:37.118 "dma_device_id": "system", 00:12:37.118 "dma_device_type": 1 00:12:37.118 }, 00:12:37.118 { 00:12:37.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.118 "dma_device_type": 2 00:12:37.118 } 00:12:37.118 ], 00:12:37.118 "driver_specific": {} 00:12:37.118 } 00:12:37.118 ] 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.118 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.118 "name": "Existed_Raid", 00:12:37.118 "uuid": "3464a843-3254-4272-9ca0-22034beec747", 00:12:37.118 "strip_size_kb": 0, 00:12:37.118 "state": "configuring", 00:12:37.118 "raid_level": "raid1", 00:12:37.118 "superblock": true, 00:12:37.118 "num_base_bdevs": 4, 00:12:37.118 "num_base_bdevs_discovered": 3, 00:12:37.118 "num_base_bdevs_operational": 4, 00:12:37.118 "base_bdevs_list": [ 00:12:37.118 { 00:12:37.118 "name": "BaseBdev1", 00:12:37.118 "uuid": "762655f2-c693-429d-b572-d7c32c735dd3", 00:12:37.118 "is_configured": true, 00:12:37.118 "data_offset": 2048, 00:12:37.118 "data_size": 63488 00:12:37.118 }, 00:12:37.118 { 00:12:37.118 "name": "BaseBdev2", 00:12:37.118 "uuid": 
"004f47bb-760e-4181-9728-87144cf69019", 00:12:37.118 "is_configured": true, 00:12:37.118 "data_offset": 2048, 00:12:37.118 "data_size": 63488 00:12:37.118 }, 00:12:37.118 { 00:12:37.118 "name": "BaseBdev3", 00:12:37.118 "uuid": "921a14d3-391e-4793-8289-e4c595b4f0bb", 00:12:37.118 "is_configured": true, 00:12:37.118 "data_offset": 2048, 00:12:37.119 "data_size": 63488 00:12:37.119 }, 00:12:37.119 { 00:12:37.119 "name": "BaseBdev4", 00:12:37.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:37.119 "is_configured": false, 00:12:37.119 "data_offset": 0, 00:12:37.119 "data_size": 0 00:12:37.119 } 00:12:37.119 ] 00:12:37.119 }' 00:12:37.119 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.119 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.687 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:37.687 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.687 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.687 [2024-10-09 03:15:20.753297] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:37.687 [2024-10-09 03:15:20.753716] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:37.687 [2024-10-09 03:15:20.753739] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:37.687 [2024-10-09 03:15:20.754186] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:37.687 [2024-10-09 03:15:20.754413] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:37.687 [2024-10-09 03:15:20.754450] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 
00:12:37.687 BaseBdev4 00:12:37.687 [2024-10-09 03:15:20.754638] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:37.687 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.687 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:37.687 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:37.687 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:37.687 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:37.687 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:37.687 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.688 [ 00:12:37.688 { 00:12:37.688 "name": "BaseBdev4", 00:12:37.688 "aliases": [ 00:12:37.688 "c0a8d011-1a6c-4c44-bbc1-6528a8bab952" 00:12:37.688 ], 00:12:37.688 "product_name": "Malloc disk", 00:12:37.688 "block_size": 512, 00:12:37.688 
"num_blocks": 65536, 00:12:37.688 "uuid": "c0a8d011-1a6c-4c44-bbc1-6528a8bab952", 00:12:37.688 "assigned_rate_limits": { 00:12:37.688 "rw_ios_per_sec": 0, 00:12:37.688 "rw_mbytes_per_sec": 0, 00:12:37.688 "r_mbytes_per_sec": 0, 00:12:37.688 "w_mbytes_per_sec": 0 00:12:37.688 }, 00:12:37.688 "claimed": true, 00:12:37.688 "claim_type": "exclusive_write", 00:12:37.688 "zoned": false, 00:12:37.688 "supported_io_types": { 00:12:37.688 "read": true, 00:12:37.688 "write": true, 00:12:37.688 "unmap": true, 00:12:37.688 "flush": true, 00:12:37.688 "reset": true, 00:12:37.688 "nvme_admin": false, 00:12:37.688 "nvme_io": false, 00:12:37.688 "nvme_io_md": false, 00:12:37.688 "write_zeroes": true, 00:12:37.688 "zcopy": true, 00:12:37.688 "get_zone_info": false, 00:12:37.688 "zone_management": false, 00:12:37.688 "zone_append": false, 00:12:37.688 "compare": false, 00:12:37.688 "compare_and_write": false, 00:12:37.688 "abort": true, 00:12:37.688 "seek_hole": false, 00:12:37.688 "seek_data": false, 00:12:37.688 "copy": true, 00:12:37.688 "nvme_iov_md": false 00:12:37.688 }, 00:12:37.688 "memory_domains": [ 00:12:37.688 { 00:12:37.688 "dma_device_id": "system", 00:12:37.688 "dma_device_type": 1 00:12:37.688 }, 00:12:37.688 { 00:12:37.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:37.688 "dma_device_type": 2 00:12:37.688 } 00:12:37.688 ], 00:12:37.688 "driver_specific": {} 00:12:37.688 } 00:12:37.688 ] 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:37.688 "name": "Existed_Raid", 00:12:37.688 "uuid": "3464a843-3254-4272-9ca0-22034beec747", 00:12:37.688 "strip_size_kb": 0, 00:12:37.688 "state": "online", 00:12:37.688 "raid_level": "raid1", 00:12:37.688 "superblock": true, 00:12:37.688 "num_base_bdevs": 4, 
00:12:37.688 "num_base_bdevs_discovered": 4, 00:12:37.688 "num_base_bdevs_operational": 4, 00:12:37.688 "base_bdevs_list": [ 00:12:37.688 { 00:12:37.688 "name": "BaseBdev1", 00:12:37.688 "uuid": "762655f2-c693-429d-b572-d7c32c735dd3", 00:12:37.688 "is_configured": true, 00:12:37.688 "data_offset": 2048, 00:12:37.688 "data_size": 63488 00:12:37.688 }, 00:12:37.688 { 00:12:37.688 "name": "BaseBdev2", 00:12:37.688 "uuid": "004f47bb-760e-4181-9728-87144cf69019", 00:12:37.688 "is_configured": true, 00:12:37.688 "data_offset": 2048, 00:12:37.688 "data_size": 63488 00:12:37.688 }, 00:12:37.688 { 00:12:37.688 "name": "BaseBdev3", 00:12:37.688 "uuid": "921a14d3-391e-4793-8289-e4c595b4f0bb", 00:12:37.688 "is_configured": true, 00:12:37.688 "data_offset": 2048, 00:12:37.688 "data_size": 63488 00:12:37.688 }, 00:12:37.688 { 00:12:37.688 "name": "BaseBdev4", 00:12:37.688 "uuid": "c0a8d011-1a6c-4c44-bbc1-6528a8bab952", 00:12:37.688 "is_configured": true, 00:12:37.688 "data_offset": 2048, 00:12:37.688 "data_size": 63488 00:12:37.688 } 00:12:37.688 ] 00:12:37.688 }' 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:37.688 03:15:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:38.257 
03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:38.257 [2024-10-09 03:15:21.288868] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:38.257 "name": "Existed_Raid", 00:12:38.257 "aliases": [ 00:12:38.257 "3464a843-3254-4272-9ca0-22034beec747" 00:12:38.257 ], 00:12:38.257 "product_name": "Raid Volume", 00:12:38.257 "block_size": 512, 00:12:38.257 "num_blocks": 63488, 00:12:38.257 "uuid": "3464a843-3254-4272-9ca0-22034beec747", 00:12:38.257 "assigned_rate_limits": { 00:12:38.257 "rw_ios_per_sec": 0, 00:12:38.257 "rw_mbytes_per_sec": 0, 00:12:38.257 "r_mbytes_per_sec": 0, 00:12:38.257 "w_mbytes_per_sec": 0 00:12:38.257 }, 00:12:38.257 "claimed": false, 00:12:38.257 "zoned": false, 00:12:38.257 "supported_io_types": { 00:12:38.257 "read": true, 00:12:38.257 "write": true, 00:12:38.257 "unmap": false, 00:12:38.257 "flush": false, 00:12:38.257 "reset": true, 00:12:38.257 "nvme_admin": false, 00:12:38.257 "nvme_io": false, 00:12:38.257 "nvme_io_md": false, 00:12:38.257 "write_zeroes": true, 00:12:38.257 "zcopy": false, 00:12:38.257 "get_zone_info": false, 00:12:38.257 "zone_management": false, 00:12:38.257 "zone_append": false, 00:12:38.257 "compare": false, 00:12:38.257 "compare_and_write": false, 00:12:38.257 "abort": false, 00:12:38.257 "seek_hole": false, 00:12:38.257 "seek_data": false, 00:12:38.257 "copy": false, 00:12:38.257 
"nvme_iov_md": false 00:12:38.257 }, 00:12:38.257 "memory_domains": [ 00:12:38.257 { 00:12:38.257 "dma_device_id": "system", 00:12:38.257 "dma_device_type": 1 00:12:38.257 }, 00:12:38.257 { 00:12:38.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.257 "dma_device_type": 2 00:12:38.257 }, 00:12:38.257 { 00:12:38.257 "dma_device_id": "system", 00:12:38.257 "dma_device_type": 1 00:12:38.257 }, 00:12:38.257 { 00:12:38.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.257 "dma_device_type": 2 00:12:38.257 }, 00:12:38.257 { 00:12:38.257 "dma_device_id": "system", 00:12:38.257 "dma_device_type": 1 00:12:38.257 }, 00:12:38.257 { 00:12:38.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.257 "dma_device_type": 2 00:12:38.257 }, 00:12:38.257 { 00:12:38.257 "dma_device_id": "system", 00:12:38.257 "dma_device_type": 1 00:12:38.257 }, 00:12:38.257 { 00:12:38.257 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.257 "dma_device_type": 2 00:12:38.257 } 00:12:38.257 ], 00:12:38.257 "driver_specific": { 00:12:38.257 "raid": { 00:12:38.257 "uuid": "3464a843-3254-4272-9ca0-22034beec747", 00:12:38.257 "strip_size_kb": 0, 00:12:38.257 "state": "online", 00:12:38.257 "raid_level": "raid1", 00:12:38.257 "superblock": true, 00:12:38.257 "num_base_bdevs": 4, 00:12:38.257 "num_base_bdevs_discovered": 4, 00:12:38.257 "num_base_bdevs_operational": 4, 00:12:38.257 "base_bdevs_list": [ 00:12:38.257 { 00:12:38.257 "name": "BaseBdev1", 00:12:38.257 "uuid": "762655f2-c693-429d-b572-d7c32c735dd3", 00:12:38.257 "is_configured": true, 00:12:38.257 "data_offset": 2048, 00:12:38.257 "data_size": 63488 00:12:38.257 }, 00:12:38.257 { 00:12:38.257 "name": "BaseBdev2", 00:12:38.257 "uuid": "004f47bb-760e-4181-9728-87144cf69019", 00:12:38.257 "is_configured": true, 00:12:38.257 "data_offset": 2048, 00:12:38.257 "data_size": 63488 00:12:38.257 }, 00:12:38.257 { 00:12:38.257 "name": "BaseBdev3", 00:12:38.257 "uuid": "921a14d3-391e-4793-8289-e4c595b4f0bb", 00:12:38.257 "is_configured": true, 
00:12:38.257 "data_offset": 2048, 00:12:38.257 "data_size": 63488 00:12:38.257 }, 00:12:38.257 { 00:12:38.257 "name": "BaseBdev4", 00:12:38.257 "uuid": "c0a8d011-1a6c-4c44-bbc1-6528a8bab952", 00:12:38.257 "is_configured": true, 00:12:38.257 "data_offset": 2048, 00:12:38.257 "data_size": 63488 00:12:38.257 } 00:12:38.257 ] 00:12:38.257 } 00:12:38.257 } 00:12:38.257 }' 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:38.257 BaseBdev2 00:12:38.257 BaseBdev3 00:12:38.257 BaseBdev4' 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.257 03:15:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.257 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.517 [2024-10-09 03:15:21.608076] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:12:38.517 03:15:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.517 "name": "Existed_Raid", 00:12:38.517 "uuid": "3464a843-3254-4272-9ca0-22034beec747", 00:12:38.517 "strip_size_kb": 0, 00:12:38.517 
"state": "online", 00:12:38.517 "raid_level": "raid1", 00:12:38.517 "superblock": true, 00:12:38.517 "num_base_bdevs": 4, 00:12:38.517 "num_base_bdevs_discovered": 3, 00:12:38.517 "num_base_bdevs_operational": 3, 00:12:38.517 "base_bdevs_list": [ 00:12:38.517 { 00:12:38.517 "name": null, 00:12:38.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.517 "is_configured": false, 00:12:38.517 "data_offset": 0, 00:12:38.517 "data_size": 63488 00:12:38.517 }, 00:12:38.517 { 00:12:38.517 "name": "BaseBdev2", 00:12:38.517 "uuid": "004f47bb-760e-4181-9728-87144cf69019", 00:12:38.517 "is_configured": true, 00:12:38.517 "data_offset": 2048, 00:12:38.517 "data_size": 63488 00:12:38.517 }, 00:12:38.517 { 00:12:38.517 "name": "BaseBdev3", 00:12:38.517 "uuid": "921a14d3-391e-4793-8289-e4c595b4f0bb", 00:12:38.517 "is_configured": true, 00:12:38.517 "data_offset": 2048, 00:12:38.517 "data_size": 63488 00:12:38.517 }, 00:12:38.517 { 00:12:38.517 "name": "BaseBdev4", 00:12:38.517 "uuid": "c0a8d011-1a6c-4c44-bbc1-6528a8bab952", 00:12:38.517 "is_configured": true, 00:12:38.517 "data_offset": 2048, 00:12:38.517 "data_size": 63488 00:12:38.517 } 00:12:38.517 ] 00:12:38.517 }' 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.517 03:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:39.085 03:15:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.085 [2024-10-09 03:15:22.191189] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.085 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.085 [2024-10-09 03:15:22.349807] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.343 [2024-10-09 03:15:22.531559] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:39.343 [2024-10-09 03:15:22.531685] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:39.343 [2024-10-09 03:15:22.641726] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:39.343 [2024-10-09 03:15:22.641802] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:39.343 [2024-10-09 03:15:22.641819] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:39.343 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.603 BaseBdev2 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:12:39.603 [ 00:12:39.603 { 00:12:39.603 "name": "BaseBdev2", 00:12:39.603 "aliases": [ 00:12:39.603 "e3a41273-5fa4-48d8-a1a4-4d60bb3547ca" 00:12:39.603 ], 00:12:39.603 "product_name": "Malloc disk", 00:12:39.603 "block_size": 512, 00:12:39.603 "num_blocks": 65536, 00:12:39.603 "uuid": "e3a41273-5fa4-48d8-a1a4-4d60bb3547ca", 00:12:39.603 "assigned_rate_limits": { 00:12:39.603 "rw_ios_per_sec": 0, 00:12:39.603 "rw_mbytes_per_sec": 0, 00:12:39.603 "r_mbytes_per_sec": 0, 00:12:39.603 "w_mbytes_per_sec": 0 00:12:39.603 }, 00:12:39.603 "claimed": false, 00:12:39.603 "zoned": false, 00:12:39.603 "supported_io_types": { 00:12:39.603 "read": true, 00:12:39.603 "write": true, 00:12:39.603 "unmap": true, 00:12:39.603 "flush": true, 00:12:39.603 "reset": true, 00:12:39.603 "nvme_admin": false, 00:12:39.603 "nvme_io": false, 00:12:39.603 "nvme_io_md": false, 00:12:39.603 "write_zeroes": true, 00:12:39.603 "zcopy": true, 00:12:39.603 "get_zone_info": false, 00:12:39.603 "zone_management": false, 00:12:39.603 "zone_append": false, 00:12:39.603 "compare": false, 00:12:39.603 "compare_and_write": false, 00:12:39.603 "abort": true, 00:12:39.603 "seek_hole": false, 00:12:39.603 "seek_data": false, 00:12:39.603 "copy": true, 00:12:39.603 "nvme_iov_md": false 00:12:39.603 }, 00:12:39.603 "memory_domains": [ 00:12:39.603 { 00:12:39.603 "dma_device_id": "system", 00:12:39.603 "dma_device_type": 1 00:12:39.603 }, 00:12:39.603 { 00:12:39.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.603 "dma_device_type": 2 00:12:39.603 } 00:12:39.603 ], 00:12:39.603 "driver_specific": {} 00:12:39.603 } 00:12:39.603 ] 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:39.603 03:15:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.603 BaseBdev3 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.603 03:15:22 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.603 [ 00:12:39.603 { 00:12:39.603 "name": "BaseBdev3", 00:12:39.603 "aliases": [ 00:12:39.603 "a8967dc5-b76b-438b-8cf9-c8940408c8a3" 00:12:39.603 ], 00:12:39.603 "product_name": "Malloc disk", 00:12:39.603 "block_size": 512, 00:12:39.603 "num_blocks": 65536, 00:12:39.603 "uuid": "a8967dc5-b76b-438b-8cf9-c8940408c8a3", 00:12:39.603 "assigned_rate_limits": { 00:12:39.603 "rw_ios_per_sec": 0, 00:12:39.603 "rw_mbytes_per_sec": 0, 00:12:39.603 "r_mbytes_per_sec": 0, 00:12:39.603 "w_mbytes_per_sec": 0 00:12:39.603 }, 00:12:39.603 "claimed": false, 00:12:39.603 "zoned": false, 00:12:39.603 "supported_io_types": { 00:12:39.603 "read": true, 00:12:39.603 "write": true, 00:12:39.603 "unmap": true, 00:12:39.603 "flush": true, 00:12:39.603 "reset": true, 00:12:39.603 "nvme_admin": false, 00:12:39.603 "nvme_io": false, 00:12:39.603 "nvme_io_md": false, 00:12:39.603 "write_zeroes": true, 00:12:39.603 "zcopy": true, 00:12:39.603 "get_zone_info": false, 00:12:39.603 "zone_management": false, 00:12:39.603 "zone_append": false, 00:12:39.603 "compare": false, 00:12:39.603 "compare_and_write": false, 00:12:39.603 "abort": true, 00:12:39.603 "seek_hole": false, 00:12:39.603 "seek_data": false, 00:12:39.603 "copy": true, 00:12:39.603 "nvme_iov_md": false 00:12:39.603 }, 00:12:39.603 "memory_domains": [ 00:12:39.603 { 00:12:39.603 "dma_device_id": "system", 00:12:39.603 "dma_device_type": 1 00:12:39.603 }, 00:12:39.603 { 00:12:39.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.603 "dma_device_type": 2 00:12:39.603 } 00:12:39.603 ], 00:12:39.603 "driver_specific": {} 00:12:39.603 } 00:12:39.603 ] 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.603 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.863 BaseBdev4 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.863 [ 00:12:39.863 { 00:12:39.863 "name": "BaseBdev4", 00:12:39.863 "aliases": [ 00:12:39.863 "397ec1c5-a81c-405b-8415-c1c805820fb2" 00:12:39.863 ], 00:12:39.863 "product_name": "Malloc disk", 00:12:39.863 "block_size": 512, 00:12:39.863 "num_blocks": 65536, 00:12:39.863 "uuid": "397ec1c5-a81c-405b-8415-c1c805820fb2", 00:12:39.863 "assigned_rate_limits": { 00:12:39.863 "rw_ios_per_sec": 0, 00:12:39.863 "rw_mbytes_per_sec": 0, 00:12:39.863 "r_mbytes_per_sec": 0, 00:12:39.863 "w_mbytes_per_sec": 0 00:12:39.863 }, 00:12:39.863 "claimed": false, 00:12:39.863 "zoned": false, 00:12:39.863 "supported_io_types": { 00:12:39.863 "read": true, 00:12:39.863 "write": true, 00:12:39.863 "unmap": true, 00:12:39.863 "flush": true, 00:12:39.863 "reset": true, 00:12:39.863 "nvme_admin": false, 00:12:39.863 "nvme_io": false, 00:12:39.863 "nvme_io_md": false, 00:12:39.863 "write_zeroes": true, 00:12:39.863 "zcopy": true, 00:12:39.863 "get_zone_info": false, 00:12:39.863 "zone_management": false, 00:12:39.863 "zone_append": false, 00:12:39.863 "compare": false, 00:12:39.863 "compare_and_write": false, 00:12:39.863 "abort": true, 00:12:39.863 "seek_hole": false, 00:12:39.863 "seek_data": false, 00:12:39.863 "copy": true, 00:12:39.863 "nvme_iov_md": false 00:12:39.863 }, 00:12:39.863 "memory_domains": [ 00:12:39.863 { 00:12:39.863 "dma_device_id": "system", 00:12:39.863 "dma_device_type": 1 00:12:39.863 }, 00:12:39.863 { 00:12:39.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.863 "dma_device_type": 2 00:12:39.863 } 00:12:39.863 ], 00:12:39.863 "driver_specific": {} 00:12:39.863 } 00:12:39.863 ] 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 
00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.863 [2024-10-09 03:15:22.956267] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:39.863 [2024-10-09 03:15:22.956375] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:39.863 [2024-10-09 03:15:22.956424] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.863 [2024-10-09 03:15:22.959012] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:39.863 [2024-10-09 03:15:22.959113] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.863 03:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:39.863 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.863 "name": "Existed_Raid", 00:12:39.863 "uuid": "bf8cf1df-b158-4039-abe1-19ecf3d684cb", 00:12:39.863 "strip_size_kb": 0, 00:12:39.863 "state": "configuring", 00:12:39.863 "raid_level": "raid1", 00:12:39.863 "superblock": true, 00:12:39.863 "num_base_bdevs": 4, 00:12:39.863 "num_base_bdevs_discovered": 3, 00:12:39.863 "num_base_bdevs_operational": 4, 00:12:39.863 "base_bdevs_list": [ 00:12:39.863 { 00:12:39.863 "name": "BaseBdev1", 00:12:39.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.863 "is_configured": false, 00:12:39.863 "data_offset": 0, 00:12:39.863 "data_size": 0 00:12:39.863 }, 00:12:39.863 { 00:12:39.863 "name": "BaseBdev2", 00:12:39.863 "uuid": "e3a41273-5fa4-48d8-a1a4-4d60bb3547ca", 
00:12:39.863 "is_configured": true, 00:12:39.863 "data_offset": 2048, 00:12:39.863 "data_size": 63488 00:12:39.863 }, 00:12:39.863 { 00:12:39.863 "name": "BaseBdev3", 00:12:39.863 "uuid": "a8967dc5-b76b-438b-8cf9-c8940408c8a3", 00:12:39.864 "is_configured": true, 00:12:39.864 "data_offset": 2048, 00:12:39.864 "data_size": 63488 00:12:39.864 }, 00:12:39.864 { 00:12:39.864 "name": "BaseBdev4", 00:12:39.864 "uuid": "397ec1c5-a81c-405b-8415-c1c805820fb2", 00:12:39.864 "is_configured": true, 00:12:39.864 "data_offset": 2048, 00:12:39.864 "data_size": 63488 00:12:39.864 } 00:12:39.864 ] 00:12:39.864 }' 00:12:39.864 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.864 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.122 [2024-10-09 03:15:23.399594] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.122 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.381 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.381 "name": "Existed_Raid", 00:12:40.381 "uuid": "bf8cf1df-b158-4039-abe1-19ecf3d684cb", 00:12:40.381 "strip_size_kb": 0, 00:12:40.381 "state": "configuring", 00:12:40.381 "raid_level": "raid1", 00:12:40.381 "superblock": true, 00:12:40.381 "num_base_bdevs": 4, 00:12:40.381 "num_base_bdevs_discovered": 2, 00:12:40.381 "num_base_bdevs_operational": 4, 00:12:40.381 "base_bdevs_list": [ 00:12:40.381 { 00:12:40.381 "name": "BaseBdev1", 00:12:40.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.381 "is_configured": false, 00:12:40.381 "data_offset": 0, 00:12:40.381 "data_size": 0 00:12:40.381 }, 00:12:40.381 { 00:12:40.381 "name": null, 00:12:40.381 "uuid": "e3a41273-5fa4-48d8-a1a4-4d60bb3547ca", 00:12:40.381 
"is_configured": false, 00:12:40.381 "data_offset": 0, 00:12:40.381 "data_size": 63488 00:12:40.381 }, 00:12:40.381 { 00:12:40.381 "name": "BaseBdev3", 00:12:40.381 "uuid": "a8967dc5-b76b-438b-8cf9-c8940408c8a3", 00:12:40.381 "is_configured": true, 00:12:40.381 "data_offset": 2048, 00:12:40.381 "data_size": 63488 00:12:40.381 }, 00:12:40.381 { 00:12:40.381 "name": "BaseBdev4", 00:12:40.381 "uuid": "397ec1c5-a81c-405b-8415-c1c805820fb2", 00:12:40.381 "is_configured": true, 00:12:40.381 "data_offset": 2048, 00:12:40.381 "data_size": 63488 00:12:40.381 } 00:12:40.381 ] 00:12:40.381 }' 00:12:40.381 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.381 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.640 [2024-10-09 03:15:23.889454] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:40.640 BaseBdev1 
00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.640 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.640 [ 00:12:40.640 { 00:12:40.640 "name": "BaseBdev1", 00:12:40.640 "aliases": [ 00:12:40.640 "34d42ef0-0f73-4ecb-90d6-e944f6bdb872" 00:12:40.640 ], 00:12:40.640 "product_name": "Malloc disk", 00:12:40.640 "block_size": 512, 00:12:40.640 "num_blocks": 65536, 00:12:40.640 "uuid": "34d42ef0-0f73-4ecb-90d6-e944f6bdb872", 00:12:40.640 "assigned_rate_limits": { 00:12:40.640 
"rw_ios_per_sec": 0, 00:12:40.640 "rw_mbytes_per_sec": 0, 00:12:40.640 "r_mbytes_per_sec": 0, 00:12:40.640 "w_mbytes_per_sec": 0 00:12:40.640 }, 00:12:40.640 "claimed": true, 00:12:40.641 "claim_type": "exclusive_write", 00:12:40.641 "zoned": false, 00:12:40.641 "supported_io_types": { 00:12:40.641 "read": true, 00:12:40.641 "write": true, 00:12:40.641 "unmap": true, 00:12:40.641 "flush": true, 00:12:40.641 "reset": true, 00:12:40.641 "nvme_admin": false, 00:12:40.641 "nvme_io": false, 00:12:40.641 "nvme_io_md": false, 00:12:40.641 "write_zeroes": true, 00:12:40.641 "zcopy": true, 00:12:40.641 "get_zone_info": false, 00:12:40.641 "zone_management": false, 00:12:40.641 "zone_append": false, 00:12:40.641 "compare": false, 00:12:40.641 "compare_and_write": false, 00:12:40.641 "abort": true, 00:12:40.641 "seek_hole": false, 00:12:40.641 "seek_data": false, 00:12:40.641 "copy": true, 00:12:40.641 "nvme_iov_md": false 00:12:40.641 }, 00:12:40.641 "memory_domains": [ 00:12:40.641 { 00:12:40.641 "dma_device_id": "system", 00:12:40.641 "dma_device_type": 1 00:12:40.641 }, 00:12:40.641 { 00:12:40.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.641 "dma_device_type": 2 00:12:40.641 } 00:12:40.641 ], 00:12:40.641 "driver_specific": {} 00:12:40.641 } 00:12:40.641 ] 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:40.641 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:40.900 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:40.900 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.900 "name": "Existed_Raid", 00:12:40.900 "uuid": "bf8cf1df-b158-4039-abe1-19ecf3d684cb", 00:12:40.900 "strip_size_kb": 0, 00:12:40.900 "state": "configuring", 00:12:40.900 "raid_level": "raid1", 00:12:40.900 "superblock": true, 00:12:40.900 "num_base_bdevs": 4, 00:12:40.900 "num_base_bdevs_discovered": 3, 00:12:40.900 "num_base_bdevs_operational": 4, 00:12:40.900 "base_bdevs_list": [ 00:12:40.900 { 00:12:40.900 "name": "BaseBdev1", 00:12:40.900 "uuid": "34d42ef0-0f73-4ecb-90d6-e944f6bdb872", 00:12:40.900 "is_configured": true, 00:12:40.900 "data_offset": 2048, 00:12:40.900 "data_size": 63488 
00:12:40.900 }, 00:12:40.900 { 00:12:40.900 "name": null, 00:12:40.900 "uuid": "e3a41273-5fa4-48d8-a1a4-4d60bb3547ca", 00:12:40.900 "is_configured": false, 00:12:40.900 "data_offset": 0, 00:12:40.900 "data_size": 63488 00:12:40.900 }, 00:12:40.900 { 00:12:40.900 "name": "BaseBdev3", 00:12:40.900 "uuid": "a8967dc5-b76b-438b-8cf9-c8940408c8a3", 00:12:40.900 "is_configured": true, 00:12:40.900 "data_offset": 2048, 00:12:40.900 "data_size": 63488 00:12:40.900 }, 00:12:40.900 { 00:12:40.900 "name": "BaseBdev4", 00:12:40.900 "uuid": "397ec1c5-a81c-405b-8415-c1c805820fb2", 00:12:40.900 "is_configured": true, 00:12:40.900 "data_offset": 2048, 00:12:40.900 "data_size": 63488 00:12:40.900 } 00:12:40.900 ] 00:12:40.900 }' 00:12:40.900 03:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.900 03:15:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.165 
[2024-10-09 03:15:24.376863] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.165 03:15:24 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.165 "name": "Existed_Raid", 00:12:41.165 "uuid": "bf8cf1df-b158-4039-abe1-19ecf3d684cb", 00:12:41.165 "strip_size_kb": 0, 00:12:41.165 "state": "configuring", 00:12:41.165 "raid_level": "raid1", 00:12:41.165 "superblock": true, 00:12:41.165 "num_base_bdevs": 4, 00:12:41.165 "num_base_bdevs_discovered": 2, 00:12:41.165 "num_base_bdevs_operational": 4, 00:12:41.165 "base_bdevs_list": [ 00:12:41.165 { 00:12:41.165 "name": "BaseBdev1", 00:12:41.165 "uuid": "34d42ef0-0f73-4ecb-90d6-e944f6bdb872", 00:12:41.165 "is_configured": true, 00:12:41.165 "data_offset": 2048, 00:12:41.165 "data_size": 63488 00:12:41.165 }, 00:12:41.165 { 00:12:41.165 "name": null, 00:12:41.165 "uuid": "e3a41273-5fa4-48d8-a1a4-4d60bb3547ca", 00:12:41.165 "is_configured": false, 00:12:41.165 "data_offset": 0, 00:12:41.165 "data_size": 63488 00:12:41.165 }, 00:12:41.165 { 00:12:41.165 "name": null, 00:12:41.165 "uuid": "a8967dc5-b76b-438b-8cf9-c8940408c8a3", 00:12:41.165 "is_configured": false, 00:12:41.165 "data_offset": 0, 00:12:41.165 "data_size": 63488 00:12:41.165 }, 00:12:41.165 { 00:12:41.165 "name": "BaseBdev4", 00:12:41.165 "uuid": "397ec1c5-a81c-405b-8415-c1c805820fb2", 00:12:41.165 "is_configured": true, 00:12:41.165 "data_offset": 2048, 00:12:41.165 "data_size": 63488 00:12:41.165 } 00:12:41.165 ] 00:12:41.165 }' 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.165 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.746 
03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.746 [2024-10-09 03:15:24.904040] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:41.746 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.747 "name": "Existed_Raid", 00:12:41.747 "uuid": "bf8cf1df-b158-4039-abe1-19ecf3d684cb", 00:12:41.747 "strip_size_kb": 0, 00:12:41.747 "state": "configuring", 00:12:41.747 "raid_level": "raid1", 00:12:41.747 "superblock": true, 00:12:41.747 "num_base_bdevs": 4, 00:12:41.747 "num_base_bdevs_discovered": 3, 00:12:41.747 "num_base_bdevs_operational": 4, 00:12:41.747 "base_bdevs_list": [ 00:12:41.747 { 00:12:41.747 "name": "BaseBdev1", 00:12:41.747 "uuid": "34d42ef0-0f73-4ecb-90d6-e944f6bdb872", 00:12:41.747 "is_configured": true, 00:12:41.747 "data_offset": 2048, 00:12:41.747 "data_size": 63488 00:12:41.747 }, 00:12:41.747 { 00:12:41.747 "name": null, 00:12:41.747 "uuid": "e3a41273-5fa4-48d8-a1a4-4d60bb3547ca", 00:12:41.747 "is_configured": false, 00:12:41.747 "data_offset": 0, 00:12:41.747 "data_size": 63488 00:12:41.747 }, 00:12:41.747 { 00:12:41.747 "name": "BaseBdev3", 00:12:41.747 "uuid": "a8967dc5-b76b-438b-8cf9-c8940408c8a3", 00:12:41.747 "is_configured": true, 00:12:41.747 "data_offset": 2048, 00:12:41.747 "data_size": 63488 00:12:41.747 }, 00:12:41.747 { 00:12:41.747 "name": "BaseBdev4", 00:12:41.747 "uuid": 
"397ec1c5-a81c-405b-8415-c1c805820fb2", 00:12:41.747 "is_configured": true, 00:12:41.747 "data_offset": 2048, 00:12:41.747 "data_size": 63488 00:12:41.747 } 00:12:41.747 ] 00:12:41.747 }' 00:12:41.747 03:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.747 03:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.315 [2024-10-09 03:15:25.403249] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.315 "name": "Existed_Raid", 00:12:42.315 "uuid": "bf8cf1df-b158-4039-abe1-19ecf3d684cb", 00:12:42.315 "strip_size_kb": 0, 00:12:42.315 "state": "configuring", 00:12:42.315 "raid_level": "raid1", 00:12:42.315 "superblock": true, 00:12:42.315 "num_base_bdevs": 4, 00:12:42.315 "num_base_bdevs_discovered": 2, 00:12:42.315 "num_base_bdevs_operational": 4, 00:12:42.315 "base_bdevs_list": [ 00:12:42.315 { 00:12:42.315 "name": null, 00:12:42.315 
"uuid": "34d42ef0-0f73-4ecb-90d6-e944f6bdb872", 00:12:42.315 "is_configured": false, 00:12:42.315 "data_offset": 0, 00:12:42.315 "data_size": 63488 00:12:42.315 }, 00:12:42.315 { 00:12:42.315 "name": null, 00:12:42.315 "uuid": "e3a41273-5fa4-48d8-a1a4-4d60bb3547ca", 00:12:42.315 "is_configured": false, 00:12:42.315 "data_offset": 0, 00:12:42.315 "data_size": 63488 00:12:42.315 }, 00:12:42.315 { 00:12:42.315 "name": "BaseBdev3", 00:12:42.315 "uuid": "a8967dc5-b76b-438b-8cf9-c8940408c8a3", 00:12:42.315 "is_configured": true, 00:12:42.315 "data_offset": 2048, 00:12:42.315 "data_size": 63488 00:12:42.315 }, 00:12:42.315 { 00:12:42.315 "name": "BaseBdev4", 00:12:42.315 "uuid": "397ec1c5-a81c-405b-8415-c1c805820fb2", 00:12:42.315 "is_configured": true, 00:12:42.315 "data_offset": 2048, 00:12:42.315 "data_size": 63488 00:12:42.315 } 00:12:42.315 ] 00:12:42.315 }' 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.315 03:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.883 [2024-10-09 03:15:26.060259] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:42.883 03:15:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:42.883 "name": "Existed_Raid", 00:12:42.883 "uuid": "bf8cf1df-b158-4039-abe1-19ecf3d684cb", 00:12:42.883 "strip_size_kb": 0, 00:12:42.883 "state": "configuring", 00:12:42.883 "raid_level": "raid1", 00:12:42.883 "superblock": true, 00:12:42.883 "num_base_bdevs": 4, 00:12:42.883 "num_base_bdevs_discovered": 3, 00:12:42.883 "num_base_bdevs_operational": 4, 00:12:42.883 "base_bdevs_list": [ 00:12:42.883 { 00:12:42.883 "name": null, 00:12:42.883 "uuid": "34d42ef0-0f73-4ecb-90d6-e944f6bdb872", 00:12:42.883 "is_configured": false, 00:12:42.883 "data_offset": 0, 00:12:42.883 "data_size": 63488 00:12:42.883 }, 00:12:42.883 { 00:12:42.883 "name": "BaseBdev2", 00:12:42.883 "uuid": "e3a41273-5fa4-48d8-a1a4-4d60bb3547ca", 00:12:42.883 "is_configured": true, 00:12:42.883 "data_offset": 2048, 00:12:42.883 "data_size": 63488 00:12:42.883 }, 00:12:42.883 { 00:12:42.883 "name": "BaseBdev3", 00:12:42.883 "uuid": "a8967dc5-b76b-438b-8cf9-c8940408c8a3", 00:12:42.883 "is_configured": true, 00:12:42.883 "data_offset": 2048, 00:12:42.883 "data_size": 63488 00:12:42.883 }, 00:12:42.883 { 00:12:42.883 "name": "BaseBdev4", 00:12:42.883 "uuid": "397ec1c5-a81c-405b-8415-c1c805820fb2", 00:12:42.883 "is_configured": true, 00:12:42.883 "data_offset": 2048, 00:12:42.883 "data_size": 63488 00:12:42.883 } 00:12:42.883 ] 00:12:42.883 }' 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:42.883 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.451 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.451 03:15:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.451 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.451 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:43.451 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.451 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:43.451 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.451 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.451 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:43.451 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.451 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.451 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 34d42ef0-0f73-4ecb-90d6-e944f6bdb872 00:12:43.451 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.451 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.451 [2024-10-09 03:15:26.664115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:43.451 [2024-10-09 03:15:26.664509] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:43.451 [2024-10-09 03:15:26.664571] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:43.451 [2024-10-09 03:15:26.664939] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:12:43.451 NewBaseBdev 00:12:43.451 [2024-10-09 03:15:26.665243] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:43.451 [2024-10-09 03:15:26.665309] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:43.451 [2024-10-09 03:15:26.665506] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.452 03:15:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.452 [ 00:12:43.452 { 00:12:43.452 "name": "NewBaseBdev", 00:12:43.452 "aliases": [ 00:12:43.452 "34d42ef0-0f73-4ecb-90d6-e944f6bdb872" 00:12:43.452 ], 00:12:43.452 "product_name": "Malloc disk", 00:12:43.452 "block_size": 512, 00:12:43.452 "num_blocks": 65536, 00:12:43.452 "uuid": "34d42ef0-0f73-4ecb-90d6-e944f6bdb872", 00:12:43.452 "assigned_rate_limits": { 00:12:43.452 "rw_ios_per_sec": 0, 00:12:43.452 "rw_mbytes_per_sec": 0, 00:12:43.452 "r_mbytes_per_sec": 0, 00:12:43.452 "w_mbytes_per_sec": 0 00:12:43.452 }, 00:12:43.452 "claimed": true, 00:12:43.452 "claim_type": "exclusive_write", 00:12:43.452 "zoned": false, 00:12:43.452 "supported_io_types": { 00:12:43.452 "read": true, 00:12:43.452 "write": true, 00:12:43.452 "unmap": true, 00:12:43.452 "flush": true, 00:12:43.452 "reset": true, 00:12:43.452 "nvme_admin": false, 00:12:43.452 "nvme_io": false, 00:12:43.452 "nvme_io_md": false, 00:12:43.452 "write_zeroes": true, 00:12:43.452 "zcopy": true, 00:12:43.452 "get_zone_info": false, 00:12:43.452 "zone_management": false, 00:12:43.452 "zone_append": false, 00:12:43.452 "compare": false, 00:12:43.452 "compare_and_write": false, 00:12:43.452 "abort": true, 00:12:43.452 "seek_hole": false, 00:12:43.452 "seek_data": false, 00:12:43.452 "copy": true, 00:12:43.452 "nvme_iov_md": false 00:12:43.452 }, 00:12:43.452 "memory_domains": [ 00:12:43.452 { 00:12:43.452 "dma_device_id": "system", 00:12:43.452 "dma_device_type": 1 00:12:43.452 }, 00:12:43.452 { 00:12:43.452 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.452 "dma_device_type": 2 00:12:43.452 } 00:12:43.452 ], 00:12:43.452 "driver_specific": {} 00:12:43.452 } 00:12:43.452 ] 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:12:43.452 03:15:26 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.452 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.712 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.712 "name": "Existed_Raid", 00:12:43.712 "uuid": "bf8cf1df-b158-4039-abe1-19ecf3d684cb", 00:12:43.712 "strip_size_kb": 0, 00:12:43.712 
"state": "online", 00:12:43.712 "raid_level": "raid1", 00:12:43.712 "superblock": true, 00:12:43.712 "num_base_bdevs": 4, 00:12:43.712 "num_base_bdevs_discovered": 4, 00:12:43.712 "num_base_bdevs_operational": 4, 00:12:43.712 "base_bdevs_list": [ 00:12:43.712 { 00:12:43.712 "name": "NewBaseBdev", 00:12:43.712 "uuid": "34d42ef0-0f73-4ecb-90d6-e944f6bdb872", 00:12:43.712 "is_configured": true, 00:12:43.712 "data_offset": 2048, 00:12:43.712 "data_size": 63488 00:12:43.712 }, 00:12:43.712 { 00:12:43.712 "name": "BaseBdev2", 00:12:43.712 "uuid": "e3a41273-5fa4-48d8-a1a4-4d60bb3547ca", 00:12:43.712 "is_configured": true, 00:12:43.712 "data_offset": 2048, 00:12:43.712 "data_size": 63488 00:12:43.712 }, 00:12:43.712 { 00:12:43.712 "name": "BaseBdev3", 00:12:43.712 "uuid": "a8967dc5-b76b-438b-8cf9-c8940408c8a3", 00:12:43.712 "is_configured": true, 00:12:43.712 "data_offset": 2048, 00:12:43.712 "data_size": 63488 00:12:43.712 }, 00:12:43.712 { 00:12:43.712 "name": "BaseBdev4", 00:12:43.712 "uuid": "397ec1c5-a81c-405b-8415-c1c805820fb2", 00:12:43.712 "is_configured": true, 00:12:43.712 "data_offset": 2048, 00:12:43.712 "data_size": 63488 00:12:43.712 } 00:12:43.712 ] 00:12:43.712 }' 00:12:43.712 03:15:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.712 03:15:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.971 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:43.971 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:43.971 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:43.971 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:43.971 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:43.971 
03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:43.971 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:43.971 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:43.971 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.971 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.971 [2024-10-09 03:15:27.135733] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:43.971 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:43.971 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:43.971 "name": "Existed_Raid", 00:12:43.971 "aliases": [ 00:12:43.971 "bf8cf1df-b158-4039-abe1-19ecf3d684cb" 00:12:43.971 ], 00:12:43.971 "product_name": "Raid Volume", 00:12:43.971 "block_size": 512, 00:12:43.971 "num_blocks": 63488, 00:12:43.971 "uuid": "bf8cf1df-b158-4039-abe1-19ecf3d684cb", 00:12:43.971 "assigned_rate_limits": { 00:12:43.971 "rw_ios_per_sec": 0, 00:12:43.971 "rw_mbytes_per_sec": 0, 00:12:43.971 "r_mbytes_per_sec": 0, 00:12:43.971 "w_mbytes_per_sec": 0 00:12:43.971 }, 00:12:43.971 "claimed": false, 00:12:43.971 "zoned": false, 00:12:43.971 "supported_io_types": { 00:12:43.971 "read": true, 00:12:43.971 "write": true, 00:12:43.971 "unmap": false, 00:12:43.971 "flush": false, 00:12:43.972 "reset": true, 00:12:43.972 "nvme_admin": false, 00:12:43.972 "nvme_io": false, 00:12:43.972 "nvme_io_md": false, 00:12:43.972 "write_zeroes": true, 00:12:43.972 "zcopy": false, 00:12:43.972 "get_zone_info": false, 00:12:43.972 "zone_management": false, 00:12:43.972 "zone_append": false, 00:12:43.972 "compare": false, 00:12:43.972 "compare_and_write": false, 00:12:43.972 
"abort": false, 00:12:43.972 "seek_hole": false, 00:12:43.972 "seek_data": false, 00:12:43.972 "copy": false, 00:12:43.972 "nvme_iov_md": false 00:12:43.972 }, 00:12:43.972 "memory_domains": [ 00:12:43.972 { 00:12:43.972 "dma_device_id": "system", 00:12:43.972 "dma_device_type": 1 00:12:43.972 }, 00:12:43.972 { 00:12:43.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.972 "dma_device_type": 2 00:12:43.972 }, 00:12:43.972 { 00:12:43.972 "dma_device_id": "system", 00:12:43.972 "dma_device_type": 1 00:12:43.972 }, 00:12:43.972 { 00:12:43.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.972 "dma_device_type": 2 00:12:43.972 }, 00:12:43.972 { 00:12:43.972 "dma_device_id": "system", 00:12:43.972 "dma_device_type": 1 00:12:43.972 }, 00:12:43.972 { 00:12:43.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.972 "dma_device_type": 2 00:12:43.972 }, 00:12:43.972 { 00:12:43.972 "dma_device_id": "system", 00:12:43.972 "dma_device_type": 1 00:12:43.972 }, 00:12:43.972 { 00:12:43.972 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.972 "dma_device_type": 2 00:12:43.972 } 00:12:43.972 ], 00:12:43.972 "driver_specific": { 00:12:43.972 "raid": { 00:12:43.972 "uuid": "bf8cf1df-b158-4039-abe1-19ecf3d684cb", 00:12:43.972 "strip_size_kb": 0, 00:12:43.972 "state": "online", 00:12:43.972 "raid_level": "raid1", 00:12:43.972 "superblock": true, 00:12:43.972 "num_base_bdevs": 4, 00:12:43.972 "num_base_bdevs_discovered": 4, 00:12:43.972 "num_base_bdevs_operational": 4, 00:12:43.972 "base_bdevs_list": [ 00:12:43.972 { 00:12:43.972 "name": "NewBaseBdev", 00:12:43.972 "uuid": "34d42ef0-0f73-4ecb-90d6-e944f6bdb872", 00:12:43.972 "is_configured": true, 00:12:43.972 "data_offset": 2048, 00:12:43.972 "data_size": 63488 00:12:43.972 }, 00:12:43.972 { 00:12:43.972 "name": "BaseBdev2", 00:12:43.972 "uuid": "e3a41273-5fa4-48d8-a1a4-4d60bb3547ca", 00:12:43.972 "is_configured": true, 00:12:43.972 "data_offset": 2048, 00:12:43.972 "data_size": 63488 00:12:43.972 }, 00:12:43.972 { 
00:12:43.972 "name": "BaseBdev3", 00:12:43.972 "uuid": "a8967dc5-b76b-438b-8cf9-c8940408c8a3", 00:12:43.972 "is_configured": true, 00:12:43.972 "data_offset": 2048, 00:12:43.972 "data_size": 63488 00:12:43.972 }, 00:12:43.972 { 00:12:43.972 "name": "BaseBdev4", 00:12:43.972 "uuid": "397ec1c5-a81c-405b-8415-c1c805820fb2", 00:12:43.972 "is_configured": true, 00:12:43.972 "data_offset": 2048, 00:12:43.972 "data_size": 63488 00:12:43.972 } 00:12:43.972 ] 00:12:43.972 } 00:12:43.972 } 00:12:43.972 }' 00:12:43.972 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:43.972 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:43.972 BaseBdev2 00:12:43.972 BaseBdev3 00:12:43.972 BaseBdev4' 00:12:43.972 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:43.972 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:43.972 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:43.972 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:43.972 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:43.972 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:43.972 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.231 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:44.232 [2024-10-09 03:15:27.470728] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:44.232 [2024-10-09 03:15:27.470850] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:44.232 [2024-10-09 03:15:27.470969] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:44.232 [2024-10-09 03:15:27.471292] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:44.232 [2024-10-09 03:15:27.471352] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 74038 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 74038 ']' 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 74038 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74038 00:12:44.232 killing process with pid 74038 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74038' 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 74038 00:12:44.232 [2024-10-09 03:15:27.517344] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:44.232 03:15:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 74038 00:12:44.800 [2024-10-09 03:15:27.939804] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:46.179 03:15:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:46.179 00:12:46.179 real 0m12.216s 00:12:46.179 user 0m18.969s 00:12:46.179 sys 0m2.246s 00:12:46.179 ************************************ 00:12:46.179 END TEST raid_state_function_test_sb 
00:12:46.179 ************************************ 00:12:46.179 03:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:46.179 03:15:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:46.179 03:15:29 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:12:46.179 03:15:29 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:46.179 03:15:29 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:46.179 03:15:29 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:46.179 ************************************ 00:12:46.179 START TEST raid_superblock_test 00:12:46.179 ************************************ 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 4 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:46.179 03:15:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74719 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74719 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 74719 ']' 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:46.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:46.179 03:15:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.179 [2024-10-09 03:15:29.413738] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:12:46.179 [2024-10-09 03:15:29.413961] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74719 ] 00:12:46.439 [2024-10-09 03:15:29.556694] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:46.698 [2024-10-09 03:15:29.795175] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:46.964 [2024-10-09 03:15:30.028135] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.964 [2024-10-09 03:15:30.028171] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:46.964 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:46.964 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:12:46.964 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:46.964 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:46.964 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:46.964 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:46.964 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:46.964 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:46.964 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:46.964 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:46.964 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:46.964 
03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:46.964 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.233 malloc1 00:12:47.233 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.233 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.234 [2024-10-09 03:15:30.283027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:47.234 [2024-10-09 03:15:30.283183] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.234 [2024-10-09 03:15:30.283242] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:47.234 [2024-10-09 03:15:30.283281] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.234 [2024-10-09 03:15:30.285580] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.234 [2024-10-09 03:15:30.285655] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:47.234 pt1 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.234 malloc2 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.234 [2024-10-09 03:15:30.351310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:47.234 [2024-10-09 03:15:30.351414] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.234 [2024-10-09 03:15:30.351453] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:47.234 [2024-10-09 03:15:30.351484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.234 [2024-10-09 03:15:30.353739] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.234 [2024-10-09 03:15:30.353811] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:47.234 
pt2 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.234 malloc3 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.234 [2024-10-09 03:15:30.414928] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:47.234 [2024-10-09 03:15:30.415023] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.234 [2024-10-09 03:15:30.415061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:47.234 [2024-10-09 03:15:30.415088] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.234 [2024-10-09 03:15:30.417363] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.234 [2024-10-09 03:15:30.417437] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:47.234 pt3 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.234 malloc4 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.234 [2024-10-09 03:15:30.478778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:47.234 [2024-10-09 03:15:30.478888] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:47.234 [2024-10-09 03:15:30.478931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:47.234 [2024-10-09 03:15:30.478958] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:47.234 [2024-10-09 03:15:30.481238] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:47.234 [2024-10-09 03:15:30.481307] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:47.234 pt4 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.234 [2024-10-09 03:15:30.490830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:47.234 [2024-10-09 03:15:30.492853] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:47.234 [2024-10-09 03:15:30.492952] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:47.234 [2024-10-09 03:15:30.493010] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:47.234 [2024-10-09 03:15:30.493221] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:47.234 [2024-10-09 03:15:30.493271] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:47.234 [2024-10-09 03:15:30.493544] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:47.234 [2024-10-09 03:15:30.493737] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:47.234 [2024-10-09 03:15:30.493782] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:47.234 [2024-10-09 03:15:30.493980] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.234 
03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.234 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.494 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.494 "name": "raid_bdev1", 00:12:47.494 "uuid": "683384f2-d94f-4fa9-8d18-304b03920cae", 00:12:47.494 "strip_size_kb": 0, 00:12:47.494 "state": "online", 00:12:47.494 "raid_level": "raid1", 00:12:47.494 "superblock": true, 00:12:47.494 "num_base_bdevs": 4, 00:12:47.494 "num_base_bdevs_discovered": 4, 00:12:47.494 "num_base_bdevs_operational": 4, 00:12:47.494 "base_bdevs_list": [ 00:12:47.494 { 00:12:47.494 "name": "pt1", 00:12:47.494 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:47.494 "is_configured": true, 00:12:47.494 "data_offset": 2048, 00:12:47.494 "data_size": 63488 00:12:47.494 }, 00:12:47.494 { 00:12:47.494 "name": "pt2", 00:12:47.494 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.494 "is_configured": true, 00:12:47.494 "data_offset": 2048, 00:12:47.494 "data_size": 63488 00:12:47.494 }, 00:12:47.494 { 00:12:47.494 "name": "pt3", 00:12:47.494 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.494 "is_configured": true, 00:12:47.494 "data_offset": 2048, 00:12:47.494 "data_size": 63488 
00:12:47.494 }, 00:12:47.494 { 00:12:47.494 "name": "pt4", 00:12:47.494 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:47.494 "is_configured": true, 00:12:47.494 "data_offset": 2048, 00:12:47.494 "data_size": 63488 00:12:47.494 } 00:12:47.494 ] 00:12:47.494 }' 00:12:47.494 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.494 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.754 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:47.754 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:47.754 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:47.755 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:47.755 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:47.755 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:47.755 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:47.755 03:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:47.755 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:47.755 03:15:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.755 [2024-10-09 03:15:30.982349] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:47.755 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:47.755 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:47.755 "name": "raid_bdev1", 00:12:47.755 "aliases": [ 00:12:47.755 "683384f2-d94f-4fa9-8d18-304b03920cae" 00:12:47.755 ], 
00:12:47.755 "product_name": "Raid Volume", 00:12:47.755 "block_size": 512, 00:12:47.755 "num_blocks": 63488, 00:12:47.755 "uuid": "683384f2-d94f-4fa9-8d18-304b03920cae", 00:12:47.755 "assigned_rate_limits": { 00:12:47.755 "rw_ios_per_sec": 0, 00:12:47.755 "rw_mbytes_per_sec": 0, 00:12:47.755 "r_mbytes_per_sec": 0, 00:12:47.755 "w_mbytes_per_sec": 0 00:12:47.755 }, 00:12:47.755 "claimed": false, 00:12:47.755 "zoned": false, 00:12:47.755 "supported_io_types": { 00:12:47.755 "read": true, 00:12:47.755 "write": true, 00:12:47.755 "unmap": false, 00:12:47.755 "flush": false, 00:12:47.755 "reset": true, 00:12:47.755 "nvme_admin": false, 00:12:47.755 "nvme_io": false, 00:12:47.755 "nvme_io_md": false, 00:12:47.755 "write_zeroes": true, 00:12:47.755 "zcopy": false, 00:12:47.755 "get_zone_info": false, 00:12:47.755 "zone_management": false, 00:12:47.755 "zone_append": false, 00:12:47.755 "compare": false, 00:12:47.755 "compare_and_write": false, 00:12:47.755 "abort": false, 00:12:47.755 "seek_hole": false, 00:12:47.755 "seek_data": false, 00:12:47.755 "copy": false, 00:12:47.755 "nvme_iov_md": false 00:12:47.755 }, 00:12:47.755 "memory_domains": [ 00:12:47.755 { 00:12:47.755 "dma_device_id": "system", 00:12:47.755 "dma_device_type": 1 00:12:47.755 }, 00:12:47.755 { 00:12:47.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.755 "dma_device_type": 2 00:12:47.755 }, 00:12:47.755 { 00:12:47.755 "dma_device_id": "system", 00:12:47.755 "dma_device_type": 1 00:12:47.755 }, 00:12:47.755 { 00:12:47.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.755 "dma_device_type": 2 00:12:47.755 }, 00:12:47.755 { 00:12:47.755 "dma_device_id": "system", 00:12:47.755 "dma_device_type": 1 00:12:47.755 }, 00:12:47.755 { 00:12:47.755 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.755 "dma_device_type": 2 00:12:47.755 }, 00:12:47.755 { 00:12:47.755 "dma_device_id": "system", 00:12:47.755 "dma_device_type": 1 00:12:47.755 }, 00:12:47.755 { 00:12:47.755 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:47.755 "dma_device_type": 2 00:12:47.755 } 00:12:47.755 ], 00:12:47.755 "driver_specific": { 00:12:47.755 "raid": { 00:12:47.755 "uuid": "683384f2-d94f-4fa9-8d18-304b03920cae", 00:12:47.755 "strip_size_kb": 0, 00:12:47.755 "state": "online", 00:12:47.755 "raid_level": "raid1", 00:12:47.755 "superblock": true, 00:12:47.755 "num_base_bdevs": 4, 00:12:47.755 "num_base_bdevs_discovered": 4, 00:12:47.755 "num_base_bdevs_operational": 4, 00:12:47.755 "base_bdevs_list": [ 00:12:47.755 { 00:12:47.755 "name": "pt1", 00:12:47.755 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:47.755 "is_configured": true, 00:12:47.755 "data_offset": 2048, 00:12:47.755 "data_size": 63488 00:12:47.755 }, 00:12:47.755 { 00:12:47.755 "name": "pt2", 00:12:47.755 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:47.755 "is_configured": true, 00:12:47.755 "data_offset": 2048, 00:12:47.755 "data_size": 63488 00:12:47.755 }, 00:12:47.755 { 00:12:47.755 "name": "pt3", 00:12:47.755 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:47.755 "is_configured": true, 00:12:47.755 "data_offset": 2048, 00:12:47.755 "data_size": 63488 00:12:47.755 }, 00:12:47.755 { 00:12:47.755 "name": "pt4", 00:12:47.755 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:47.755 "is_configured": true, 00:12:47.755 "data_offset": 2048, 00:12:47.755 "data_size": 63488 00:12:47.755 } 00:12:47.755 ] 00:12:47.755 } 00:12:47.755 } 00:12:47.755 }' 00:12:47.755 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:48.016 pt2 00:12:48.016 pt3 00:12:48.016 pt4' 00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.016 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.276 [2024-10-09 03:15:31.321644] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=683384f2-d94f-4fa9-8d18-304b03920cae
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 683384f2-d94f-4fa9-8d18-304b03920cae ']'
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.276 [2024-10-09 03:15:31.361263] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:48.276 [2024-10-09 03:15:31.361346] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:12:48.276 [2024-10-09 03:15:31.361461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:12:48.276 [2024-10-09 03:15:31.361578] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:12:48.276 [2024-10-09 03:15:31.361630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.276 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.277 [2024-10-09 03:15:31.516993] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:12:48.277 [2024-10-09 03:15:31.519019] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:12:48.277 [2024-10-09 03:15:31.519107] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:12:48.277 [2024-10-09 03:15:31.519155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:12:48.277 [2024-10-09 03:15:31.519228] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:12:48.277 [2024-10-09 03:15:31.519299] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:12:48.277 [2024-10-09 03:15:31.519376] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:12:48.277 [2024-10-09 03:15:31.519429] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:12:48.277 [2024-10-09 03:15:31.519469] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:12:48.277 [2024-10-09 03:15:31.519492] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:12:48.277 request:
00:12:48.277 {
00:12:48.277 "name": "raid_bdev1",
00:12:48.277 "raid_level": "raid1",
00:12:48.277 "base_bdevs": [
00:12:48.277 "malloc1",
00:12:48.277 "malloc2",
00:12:48.277 "malloc3",
00:12:48.277 "malloc4"
00:12:48.277 ],
00:12:48.277 "superblock": false,
00:12:48.277 "method": "bdev_raid_create",
00:12:48.277 "req_id": 1
00:12:48.277 }
00:12:48.277 Got JSON-RPC error response
00:12:48.277 response:
00:12:48.277 {
00:12:48.277 "code": -17,
00:12:48.277 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:12:48.277 }
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.277 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.277 [2024-10-09 03:15:31.576889] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:12:48.277 [2024-10-09 03:15:31.576970] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:48.277 [2024-10-09 03:15:31.577000] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:12:48.277 [2024-10-09 03:15:31.577028] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:48.537 [2024-10-09 03:15:31.579297] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:48.537 [2024-10-09 03:15:31.579367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:12:48.537 [2024-10-09 03:15:31.579452] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:12:48.537 [2024-10-09 03:15:31.579536] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:12:48.537 pt1
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:48.537 "name": "raid_bdev1",
00:12:48.537 "uuid": "683384f2-d94f-4fa9-8d18-304b03920cae",
00:12:48.537 "strip_size_kb": 0,
00:12:48.537 "state": "configuring",
00:12:48.537 "raid_level": "raid1",
00:12:48.537 "superblock": true,
00:12:48.537 "num_base_bdevs": 4,
00:12:48.537 "num_base_bdevs_discovered": 1,
00:12:48.537 "num_base_bdevs_operational": 4,
00:12:48.537 "base_bdevs_list": [
00:12:48.537 {
00:12:48.537 "name": "pt1",
00:12:48.537 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:48.537 "is_configured": true,
00:12:48.537 "data_offset": 2048,
00:12:48.537 "data_size": 63488
00:12:48.537 },
00:12:48.537 {
00:12:48.537 "name": null,
00:12:48.537 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:48.537 "is_configured": false,
00:12:48.537 "data_offset": 2048,
00:12:48.537 "data_size": 63488
00:12:48.537 },
00:12:48.537 {
00:12:48.537 "name": null,
00:12:48.537 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:48.537 "is_configured": false,
00:12:48.537 "data_offset": 2048,
00:12:48.537 "data_size": 63488
00:12:48.537 },
00:12:48.537 {
00:12:48.537 "name": null,
00:12:48.537 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:48.537 "is_configured": false,
00:12:48.537 "data_offset": 2048,
00:12:48.537 "data_size": 63488
00:12:48.537 }
00:12:48.537 ]
00:12:48.537 }'
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:48.537 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.798 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:12:48.798 03:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:48.798 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.798 03:15:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.798 [2024-10-09 03:15:31.996206] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:48.798 [2024-10-09 03:15:31.996363] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:48.798 [2024-10-09 03:15:31.996404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:12:48.798 [2024-10-09 03:15:31.996434] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:48.798 [2024-10-09 03:15:31.997003] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:48.798 [2024-10-09 03:15:31.997074] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:48.798 [2024-10-09 03:15:31.997205] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:48.798 [2024-10-09 03:15:31.997269] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:48.798 pt2
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.798 [2024-10-09 03:15:32.008169] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:48.798 "name": "raid_bdev1",
00:12:48.798 "uuid": "683384f2-d94f-4fa9-8d18-304b03920cae",
00:12:48.798 "strip_size_kb": 0,
00:12:48.798 "state": "configuring",
00:12:48.798 "raid_level": "raid1",
00:12:48.798 "superblock": true,
00:12:48.798 "num_base_bdevs": 4,
00:12:48.798 "num_base_bdevs_discovered": 1,
00:12:48.798 "num_base_bdevs_operational": 4,
00:12:48.798 "base_bdevs_list": [
00:12:48.798 {
00:12:48.798 "name": "pt1",
00:12:48.798 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:48.798 "is_configured": true,
00:12:48.798 "data_offset": 2048,
00:12:48.798 "data_size": 63488
00:12:48.798 },
00:12:48.798 {
00:12:48.798 "name": null,
00:12:48.798 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:48.798 "is_configured": false,
00:12:48.798 "data_offset": 0,
00:12:48.798 "data_size": 63488
00:12:48.798 },
00:12:48.798 {
00:12:48.798 "name": null,
00:12:48.798 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:48.798 "is_configured": false,
00:12:48.798 "data_offset": 2048,
00:12:48.798 "data_size": 63488
00:12:48.798 },
00:12:48.798 {
00:12:48.798 "name": null,
00:12:48.798 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:48.798 "is_configured": false,
00:12:48.798 "data_offset": 2048,
00:12:48.798 "data_size": 63488
00:12:48.798 }
00:12:48.798 ]
00:12:48.798 }'
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:48.798 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 ))
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:49.369 [2024-10-09 03:15:32.467436] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:12:49.369 [2024-10-09 03:15:32.467551] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:49.369 [2024-10-09 03:15:32.467594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80
00:12:49.369 [2024-10-09 03:15:32.467623] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:49.369 [2024-10-09 03:15:32.468142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:49.369 [2024-10-09 03:15:32.468167] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:12:49.369 [2024-10-09 03:15:32.468260] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:12:49.369 [2024-10-09 03:15:32.468283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:12:49.369 pt2
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:49.369 [2024-10-09 03:15:32.479393] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:12:49.369 [2024-10-09 03:15:32.479486] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:49.369 [2024-10-09 03:15:32.479523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80
00:12:49.369 [2024-10-09 03:15:32.479548] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:49.369 [2024-10-09 03:15:32.479943] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:49.369 [2024-10-09 03:15:32.479991] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:12:49.369 [2024-10-09 03:15:32.480074] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3
00:12:49.369 [2024-10-09 03:15:32.480115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:12:49.369 pt3
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:49.369 [2024-10-09 03:15:32.491344] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:12:49.369 [2024-10-09 03:15:32.491383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:12:49.369 [2024-10-09 03:15:32.491398] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180
00:12:49.369 [2024-10-09 03:15:32.491406] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:12:49.369 [2024-10-09 03:15:32.491740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:12:49.369 [2024-10-09 03:15:32.491754] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:12:49.369 [2024-10-09 03:15:32.491808] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4
00:12:49.369 [2024-10-09 03:15:32.491823] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:12:49.369 [2024-10-09 03:15:32.491974] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:12:49.369 [2024-10-09 03:15:32.491983] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:12:49.369 [2024-10-09 03:15:32.492231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0
00:12:49.369 [2024-10-09 03:15:32.492384] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:12:49.369 [2024-10-09 03:15:32.492403] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80
00:12:49.369 [2024-10-09 03:15:32.492547] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:12:49.369 pt4
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ ))
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs ))
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:12:49.369 "name": "raid_bdev1",
00:12:49.369 "uuid": "683384f2-d94f-4fa9-8d18-304b03920cae",
00:12:49.369 "strip_size_kb": 0,
00:12:49.369 "state": "online",
00:12:49.369 "raid_level": "raid1",
00:12:49.369 "superblock": true,
00:12:49.369 "num_base_bdevs": 4,
00:12:49.369 "num_base_bdevs_discovered": 4,
00:12:49.369 "num_base_bdevs_operational": 4,
00:12:49.369 "base_bdevs_list": [
00:12:49.369 {
00:12:49.369 "name": "pt1",
00:12:49.369 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:49.369 "is_configured": true,
00:12:49.369 "data_offset": 2048,
00:12:49.369 "data_size": 63488
00:12:49.369 },
00:12:49.369 {
00:12:49.369 "name": "pt2",
00:12:49.369 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:49.369 "is_configured": true,
00:12:49.369 "data_offset": 2048,
00:12:49.369 "data_size": 63488
00:12:49.369 },
00:12:49.369 {
00:12:49.369 "name": "pt3",
00:12:49.369 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:49.369 "is_configured": true,
00:12:49.369 "data_offset": 2048,
00:12:49.369 "data_size": 63488
00:12:49.369 },
00:12:49.369 {
00:12:49.369 "name": "pt4",
00:12:49.369 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:49.369 "is_configured": true,
00:12:49.369 "data_offset": 2048,
00:12:49.369 "data_size": 63488
00:12:49.369 }
00:12:49.369 ]
00:12:49.369 }'
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:12:49.369 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:49.629 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1
00:12:49.629 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:12:49.629 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:12:49.629 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:12:49.629 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:12:49.629 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:12:49.629 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:12:49.629 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:49.629 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:12:49.629 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:12:49.629 [2024-10-09 03:15:32.910912] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:12:49.889 03:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:49.889 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:12:49.889 "name": "raid_bdev1",
00:12:49.889 "aliases": [
00:12:49.889 "683384f2-d94f-4fa9-8d18-304b03920cae"
00:12:49.889 ],
00:12:49.889 "product_name": "Raid Volume",
00:12:49.889 "block_size": 512,
00:12:49.889 "num_blocks": 63488,
00:12:49.889 "uuid": "683384f2-d94f-4fa9-8d18-304b03920cae",
00:12:49.889 "assigned_rate_limits": {
00:12:49.889 "rw_ios_per_sec": 0,
00:12:49.889 "rw_mbytes_per_sec": 0,
00:12:49.889 "r_mbytes_per_sec": 0,
00:12:49.889 "w_mbytes_per_sec": 0
00:12:49.889 },
00:12:49.889 "claimed": false,
00:12:49.889 "zoned": false,
00:12:49.889 "supported_io_types": {
00:12:49.889 "read": true,
00:12:49.889 "write": true,
00:12:49.889 "unmap": false,
00:12:49.889 "flush": false,
00:12:49.889 "reset": true,
00:12:49.889 "nvme_admin": false,
00:12:49.889 "nvme_io": false,
00:12:49.889 "nvme_io_md": false,
00:12:49.889 "write_zeroes": true,
00:12:49.889 "zcopy": false,
00:12:49.889 "get_zone_info": false,
00:12:49.889 "zone_management": false,
00:12:49.889 "zone_append": false,
00:12:49.889 "compare": false,
00:12:49.889 "compare_and_write": false,
00:12:49.889 "abort": false,
00:12:49.889 "seek_hole": false,
00:12:49.889 "seek_data": false,
00:12:49.889 "copy": false,
00:12:49.889 "nvme_iov_md": false
00:12:49.889 },
00:12:49.889 "memory_domains": [
00:12:49.889 {
00:12:49.889 "dma_device_id": "system",
00:12:49.889 "dma_device_type": 1
00:12:49.889 },
00:12:49.889 {
00:12:49.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:49.889 "dma_device_type": 2
00:12:49.889 },
00:12:49.889 {
00:12:49.889 "dma_device_id": "system",
00:12:49.889 "dma_device_type": 1
00:12:49.889 },
00:12:49.889 {
00:12:49.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:49.889 "dma_device_type": 2
00:12:49.889 },
00:12:49.889 {
00:12:49.889 "dma_device_id": "system",
00:12:49.889 "dma_device_type": 1
00:12:49.889 },
00:12:49.889 {
00:12:49.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:49.889 "dma_device_type": 2
00:12:49.889 },
00:12:49.889 {
00:12:49.889 "dma_device_id": "system",
00:12:49.889 "dma_device_type": 1
00:12:49.889 },
00:12:49.889 {
00:12:49.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:12:49.889 "dma_device_type": 2
00:12:49.889 }
00:12:49.889 ],
00:12:49.889 "driver_specific": {
00:12:49.889 "raid": {
00:12:49.889 "uuid": "683384f2-d94f-4fa9-8d18-304b03920cae",
00:12:49.889 "strip_size_kb": 0,
00:12:49.889 "state": "online",
00:12:49.889 "raid_level": "raid1",
00:12:49.889 "superblock": true,
00:12:49.889 "num_base_bdevs": 4,
00:12:49.889 "num_base_bdevs_discovered": 4,
00:12:49.889 "num_base_bdevs_operational": 4,
00:12:49.889 "base_bdevs_list": [
00:12:49.889 {
00:12:49.889 "name": "pt1",
00:12:49.889 "uuid": "00000000-0000-0000-0000-000000000001",
00:12:49.889 "is_configured": true,
00:12:49.889 "data_offset": 2048,
00:12:49.889 "data_size": 63488
00:12:49.889 },
00:12:49.889 {
00:12:49.889 "name": "pt2",
00:12:49.889 "uuid": "00000000-0000-0000-0000-000000000002",
00:12:49.889 "is_configured": true,
00:12:49.889 "data_offset": 2048,
00:12:49.889 "data_size": 63488
00:12:49.889 },
00:12:49.889 {
00:12:49.889 "name": "pt3",
00:12:49.889 "uuid": "00000000-0000-0000-0000-000000000003",
00:12:49.889 "is_configured": true,
00:12:49.889 "data_offset": 2048,
00:12:49.889 "data_size": 63488
00:12:49.889 },
00:12:49.889 {
00:12:49.889 "name": "pt4",
00:12:49.889 "uuid": "00000000-0000-0000-0000-000000000004",
00:12:49.889 "is_configured": true,
00:12:49.889 "data_offset": 2048,
00:12:49.889 "data_size": 63488
00:12:49.889 }
00:12:49.889 ]
00:12:49.889 }
00:12:49.889 }
00:12:49.889 }'
00:12:49.890 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:12:49.890 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:12:49.890 pt2
00:12:49.890 pt3
00:12:49.890 pt4'
00:12:49.890 03:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:12:49.890 03:15:33
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:12:49.890 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.149 [2024-10-09 03:15:33.230302] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 683384f2-d94f-4fa9-8d18-304b03920cae '!=' 683384f2-d94f-4fa9-8d18-304b03920cae ']' 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.149 [2024-10-09 03:15:33.266025] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:50.149 03:15:33 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.149 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.149 "name": "raid_bdev1", 00:12:50.149 "uuid": "683384f2-d94f-4fa9-8d18-304b03920cae", 00:12:50.149 "strip_size_kb": 0, 00:12:50.149 "state": "online", 
00:12:50.149 "raid_level": "raid1", 00:12:50.149 "superblock": true, 00:12:50.149 "num_base_bdevs": 4, 00:12:50.149 "num_base_bdevs_discovered": 3, 00:12:50.149 "num_base_bdevs_operational": 3, 00:12:50.149 "base_bdevs_list": [ 00:12:50.149 { 00:12:50.149 "name": null, 00:12:50.149 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.149 "is_configured": false, 00:12:50.149 "data_offset": 0, 00:12:50.149 "data_size": 63488 00:12:50.149 }, 00:12:50.149 { 00:12:50.149 "name": "pt2", 00:12:50.149 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:50.149 "is_configured": true, 00:12:50.149 "data_offset": 2048, 00:12:50.149 "data_size": 63488 00:12:50.149 }, 00:12:50.150 { 00:12:50.150 "name": "pt3", 00:12:50.150 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:50.150 "is_configured": true, 00:12:50.150 "data_offset": 2048, 00:12:50.150 "data_size": 63488 00:12:50.150 }, 00:12:50.150 { 00:12:50.150 "name": "pt4", 00:12:50.150 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:50.150 "is_configured": true, 00:12:50.150 "data_offset": 2048, 00:12:50.150 "data_size": 63488 00:12:50.150 } 00:12:50.150 ] 00:12:50.150 }' 00:12:50.150 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.150 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.718 [2024-10-09 03:15:33.717294] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:50.718 [2024-10-09 03:15:33.717398] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:50.718 [2024-10-09 03:15:33.717536] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:12:50.718 [2024-10-09 03:15:33.717648] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:50.718 [2024-10-09 03:15:33.717689] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:50.718 
03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.718 [2024-10-09 03:15:33.817056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:50.718 [2024-10-09 03:15:33.817153] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.718 [2024-10-09 03:15:33.817200] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:12:50.718 [2024-10-09 03:15:33.817230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.718 [2024-10-09 03:15:33.819640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.718 [2024-10-09 03:15:33.819705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:50.718 [2024-10-09 03:15:33.819810] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:50.718 [2024-10-09 03:15:33.819897] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:50.718 pt2 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:50.718 "name": "raid_bdev1", 00:12:50.718 "uuid": "683384f2-d94f-4fa9-8d18-304b03920cae", 00:12:50.718 "strip_size_kb": 0, 00:12:50.718 "state": "configuring", 00:12:50.718 "raid_level": "raid1", 00:12:50.718 "superblock": true, 00:12:50.718 "num_base_bdevs": 4, 00:12:50.718 "num_base_bdevs_discovered": 1, 00:12:50.718 "num_base_bdevs_operational": 3, 00:12:50.718 "base_bdevs_list": [ 00:12:50.718 { 00:12:50.718 "name": null, 00:12:50.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.718 "is_configured": false, 00:12:50.718 "data_offset": 2048, 00:12:50.718 "data_size": 63488 00:12:50.718 }, 00:12:50.718 { 00:12:50.718 "name": "pt2", 00:12:50.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:50.718 "is_configured": true, 00:12:50.718 "data_offset": 2048, 00:12:50.718 "data_size": 63488 00:12:50.718 }, 00:12:50.718 { 00:12:50.718 "name": null, 00:12:50.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:50.718 "is_configured": false, 00:12:50.718 "data_offset": 2048, 00:12:50.718 "data_size": 63488 00:12:50.718 }, 00:12:50.718 { 00:12:50.718 "name": null, 00:12:50.718 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:50.718 "is_configured": false, 00:12:50.718 "data_offset": 2048, 00:12:50.718 "data_size": 63488 00:12:50.718 } 00:12:50.718 ] 00:12:50.718 }' 
00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:50.718 03:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.978 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:50.978 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:50.978 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:50.978 03:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:50.978 03:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.978 [2024-10-09 03:15:34.272333] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:50.978 [2024-10-09 03:15:34.272455] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:50.978 [2024-10-09 03:15:34.272507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:12:50.978 [2024-10-09 03:15:34.272538] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:50.978 [2024-10-09 03:15:34.273068] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:50.978 [2024-10-09 03:15:34.273126] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:50.978 [2024-10-09 03:15:34.273232] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:50.978 [2024-10-09 03:15:34.273279] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:50.978 pt3 00:12:50.978 03:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:50.978 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:12:50.978 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:50.978 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:50.978 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:50.978 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:50.978 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:50.978 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:50.978 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.237 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.237 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.237 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.237 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.237 03:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.237 03:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.237 03:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.237 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.237 "name": "raid_bdev1", 00:12:51.237 "uuid": "683384f2-d94f-4fa9-8d18-304b03920cae", 00:12:51.237 "strip_size_kb": 0, 00:12:51.237 "state": "configuring", 00:12:51.237 "raid_level": "raid1", 00:12:51.237 "superblock": true, 00:12:51.237 "num_base_bdevs": 4, 00:12:51.237 "num_base_bdevs_discovered": 2, 00:12:51.237 "num_base_bdevs_operational": 3, 00:12:51.237 
"base_bdevs_list": [ 00:12:51.237 { 00:12:51.237 "name": null, 00:12:51.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.237 "is_configured": false, 00:12:51.237 "data_offset": 2048, 00:12:51.237 "data_size": 63488 00:12:51.237 }, 00:12:51.237 { 00:12:51.237 "name": "pt2", 00:12:51.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:51.237 "is_configured": true, 00:12:51.237 "data_offset": 2048, 00:12:51.237 "data_size": 63488 00:12:51.237 }, 00:12:51.237 { 00:12:51.237 "name": "pt3", 00:12:51.237 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:51.237 "is_configured": true, 00:12:51.237 "data_offset": 2048, 00:12:51.237 "data_size": 63488 00:12:51.237 }, 00:12:51.237 { 00:12:51.237 "name": null, 00:12:51.237 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:51.237 "is_configured": false, 00:12:51.237 "data_offset": 2048, 00:12:51.237 "data_size": 63488 00:12:51.237 } 00:12:51.237 ] 00:12:51.237 }' 00:12:51.237 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.237 03:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.498 [2024-10-09 03:15:34.755556] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:51.498 [2024-10-09 03:15:34.755678] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:51.498 [2024-10-09 03:15:34.755720] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:12:51.498 [2024-10-09 03:15:34.755746] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:51.498 [2024-10-09 03:15:34.756274] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:51.498 [2024-10-09 03:15:34.756332] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:51.498 [2024-10-09 03:15:34.756441] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:51.498 [2024-10-09 03:15:34.756496] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:51.498 [2024-10-09 03:15:34.756679] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:51.498 [2024-10-09 03:15:34.756714] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:51.498 [2024-10-09 03:15:34.756997] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:51.498 [2024-10-09 03:15:34.757187] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:51.498 [2024-10-09 03:15:34.757232] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:12:51.498 [2024-10-09 03:15:34.757395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:51.498 pt4 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:51.498 03:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:51.758 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.758 "name": "raid_bdev1", 00:12:51.758 "uuid": "683384f2-d94f-4fa9-8d18-304b03920cae", 00:12:51.758 "strip_size_kb": 0, 00:12:51.758 "state": "online", 00:12:51.758 "raid_level": "raid1", 00:12:51.758 "superblock": true, 00:12:51.758 "num_base_bdevs": 4, 00:12:51.758 "num_base_bdevs_discovered": 3, 00:12:51.758 "num_base_bdevs_operational": 3, 00:12:51.758 "base_bdevs_list": [ 00:12:51.758 { 00:12:51.758 "name": null, 00:12:51.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.758 "is_configured": false, 00:12:51.758 
"data_offset": 2048, 00:12:51.758 "data_size": 63488 00:12:51.758 }, 00:12:51.758 { 00:12:51.758 "name": "pt2", 00:12:51.758 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:51.758 "is_configured": true, 00:12:51.758 "data_offset": 2048, 00:12:51.758 "data_size": 63488 00:12:51.758 }, 00:12:51.758 { 00:12:51.758 "name": "pt3", 00:12:51.758 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:51.758 "is_configured": true, 00:12:51.758 "data_offset": 2048, 00:12:51.758 "data_size": 63488 00:12:51.758 }, 00:12:51.758 { 00:12:51.758 "name": "pt4", 00:12:51.758 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:51.758 "is_configured": true, 00:12:51.758 "data_offset": 2048, 00:12:51.758 "data_size": 63488 00:12:51.758 } 00:12:51.758 ] 00:12:51.758 }' 00:12:51.758 03:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.758 03:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.017 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:52.017 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.017 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.017 [2024-10-09 03:15:35.186764] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:52.017 [2024-10-09 03:15:35.186835] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:52.017 [2024-10-09 03:15:35.186934] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:52.017 [2024-10-09 03:15:35.187013] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:52.018 [2024-10-09 03:15:35.187077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:12:52.018 03:15:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.018 [2024-10-09 03:15:35.262642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:52.018 [2024-10-09 03:15:35.262736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:12:52.018 [2024-10-09 03:15:35.262780] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:12:52.018 [2024-10-09 03:15:35.262810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.018 [2024-10-09 03:15:35.265141] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.018 [2024-10-09 03:15:35.265214] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:52.018 [2024-10-09 03:15:35.265299] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:52.018 [2024-10-09 03:15:35.265367] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:52.018 [2024-10-09 03:15:35.265523] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:52.018 [2024-10-09 03:15:35.265577] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:52.018 [2024-10-09 03:15:35.265606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:12:52.018 [2024-10-09 03:15:35.265710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:52.018 [2024-10-09 03:15:35.265853] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:52.018 pt1 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.018 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.277 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.277 "name": "raid_bdev1", 00:12:52.277 "uuid": "683384f2-d94f-4fa9-8d18-304b03920cae", 00:12:52.277 "strip_size_kb": 0, 00:12:52.277 "state": "configuring", 00:12:52.277 "raid_level": "raid1", 00:12:52.277 "superblock": true, 00:12:52.277 "num_base_bdevs": 4, 00:12:52.277 "num_base_bdevs_discovered": 2, 00:12:52.277 "num_base_bdevs_operational": 3, 00:12:52.277 "base_bdevs_list": [ 00:12:52.277 { 00:12:52.277 "name": null, 00:12:52.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.277 "is_configured": false, 00:12:52.277 "data_offset": 2048, 00:12:52.277 
"data_size": 63488 00:12:52.277 }, 00:12:52.277 { 00:12:52.277 "name": "pt2", 00:12:52.277 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:52.277 "is_configured": true, 00:12:52.277 "data_offset": 2048, 00:12:52.277 "data_size": 63488 00:12:52.277 }, 00:12:52.277 { 00:12:52.277 "name": "pt3", 00:12:52.277 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:52.277 "is_configured": true, 00:12:52.277 "data_offset": 2048, 00:12:52.277 "data_size": 63488 00:12:52.277 }, 00:12:52.277 { 00:12:52.277 "name": null, 00:12:52.277 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:52.277 "is_configured": false, 00:12:52.277 "data_offset": 2048, 00:12:52.277 "data_size": 63488 00:12:52.277 } 00:12:52.277 ] 00:12:52.277 }' 00:12:52.277 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.277 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.536 [2024-10-09 
03:15:35.725905] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:52.536 [2024-10-09 03:15:35.726003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:52.536 [2024-10-09 03:15:35.726048] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:12:52.536 [2024-10-09 03:15:35.726081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:52.536 [2024-10-09 03:15:35.726519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:52.536 [2024-10-09 03:15:35.726578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:52.536 [2024-10-09 03:15:35.726673] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:52.536 [2024-10-09 03:15:35.726717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:52.536 [2024-10-09 03:15:35.726873] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:12:52.536 [2024-10-09 03:15:35.726910] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:52.536 [2024-10-09 03:15:35.727178] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:12:52.536 [2024-10-09 03:15:35.727349] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:12:52.536 [2024-10-09 03:15:35.727389] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:12:52.536 [2024-10-09 03:15:35.727550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:52.536 pt4 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:52.536 03:15:35 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:52.536 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.537 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:52.537 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:52.537 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.537 "name": "raid_bdev1", 00:12:52.537 "uuid": "683384f2-d94f-4fa9-8d18-304b03920cae", 00:12:52.537 "strip_size_kb": 0, 00:12:52.537 "state": "online", 00:12:52.537 "raid_level": "raid1", 00:12:52.537 "superblock": true, 00:12:52.537 "num_base_bdevs": 4, 00:12:52.537 "num_base_bdevs_discovered": 3, 00:12:52.537 "num_base_bdevs_operational": 3, 00:12:52.537 "base_bdevs_list": [ 00:12:52.537 { 
00:12:52.537 "name": null, 00:12:52.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.537 "is_configured": false, 00:12:52.537 "data_offset": 2048, 00:12:52.537 "data_size": 63488 00:12:52.537 }, 00:12:52.537 { 00:12:52.537 "name": "pt2", 00:12:52.537 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:52.537 "is_configured": true, 00:12:52.537 "data_offset": 2048, 00:12:52.537 "data_size": 63488 00:12:52.537 }, 00:12:52.537 { 00:12:52.537 "name": "pt3", 00:12:52.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:52.537 "is_configured": true, 00:12:52.537 "data_offset": 2048, 00:12:52.537 "data_size": 63488 00:12:52.537 }, 00:12:52.537 { 00:12:52.537 "name": "pt4", 00:12:52.537 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:52.537 "is_configured": true, 00:12:52.537 "data_offset": 2048, 00:12:52.537 "data_size": 63488 00:12:52.537 } 00:12:52.537 ] 00:12:52.537 }' 00:12:52.537 03:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.537 03:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.104 03:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:12:53.104 03:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:53.104 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.104 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.104 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.104 03:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:12:53.104 03:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:53.104 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:53.104 
03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.104 03:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:12:53.104 [2024-10-09 03:15:36.193307] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:53.104 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:53.104 03:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 683384f2-d94f-4fa9-8d18-304b03920cae '!=' 683384f2-d94f-4fa9-8d18-304b03920cae ']' 00:12:53.104 03:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74719 00:12:53.104 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 74719 ']' 00:12:53.104 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # kill -0 74719 00:12:53.105 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # uname 00:12:53.105 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:53.105 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 74719 00:12:53.105 killing process with pid 74719 00:12:53.105 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:53.105 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:53.105 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 74719' 00:12:53.105 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@969 -- # kill 74719 00:12:53.105 [2024-10-09 03:15:36.278996] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:53.105 [2024-10-09 03:15:36.279071] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:53.105 [2024-10-09 03:15:36.279133] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:53.105 [2024-10-09 03:15:36.279146] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:12:53.105 03:15:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@974 -- # wait 74719 00:12:53.674 [2024-10-09 03:15:36.686675] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:55.056 ************************************ 00:12:55.056 END TEST raid_superblock_test 00:12:55.056 ************************************ 00:12:55.056 03:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:55.056 00:12:55.056 real 0m8.693s 00:12:55.056 user 0m13.353s 00:12:55.056 sys 0m1.688s 00:12:55.056 03:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:55.056 03:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.056 03:15:38 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:12:55.056 03:15:38 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:55.056 03:15:38 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:55.056 03:15:38 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:55.056 ************************************ 00:12:55.056 START TEST raid_read_error_test 00:12:55.056 ************************************ 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 read 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:55.056 03:15:38 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OLb3apn3bq 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75211 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75211 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@831 -- # '[' -z 75211 ']' 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:55.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:55.056 03:15:38 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.056 [2024-10-09 03:15:38.200696] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:12:55.056 [2024-10-09 03:15:38.201372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75211 ] 00:12:55.316 [2024-10-09 03:15:38.389346] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:55.575 [2024-10-09 03:15:38.644087] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.575 [2024-10-09 03:15:38.876078] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.575 [2024-10-09 03:15:38.876252] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # return 0 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.835 BaseBdev1_malloc 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.835 true 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.835 [2024-10-09 03:15:39.107241] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:55.835 [2024-10-09 03:15:39.107401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:55.835 [2024-10-09 03:15:39.107440] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:55.835 [2024-10-09 03:15:39.107475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:55.835 [2024-10-09 03:15:39.109881] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:55.835 [2024-10-09 03:15:39.109959] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:55.835 BaseBdev1 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.835 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.096 BaseBdev2_malloc 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.096 true 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.096 [2024-10-09 03:15:39.185912] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:56.096 [2024-10-09 03:15:39.186026] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.096 [2024-10-09 03:15:39.186067] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:56.096 [2024-10-09 03:15:39.186104] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.096 [2024-10-09 03:15:39.188362] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.096 [2024-10-09 03:15:39.188440] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:56.096 BaseBdev2 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.096 BaseBdev3_malloc 00:12:56.096 03:15:39 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.096 true 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.096 [2024-10-09 03:15:39.259194] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:56.096 [2024-10-09 03:15:39.259314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.096 [2024-10-09 03:15:39.259348] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:56.096 [2024-10-09 03:15:39.259400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.096 [2024-10-09 03:15:39.261633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.096 [2024-10-09 03:15:39.261707] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:56.096 BaseBdev3 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.096 BaseBdev4_malloc 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.096 true 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.096 [2024-10-09 03:15:39.332988] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:56.096 [2024-10-09 03:15:39.333107] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:56.096 [2024-10-09 03:15:39.333142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:56.096 [2024-10-09 03:15:39.333187] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:56.096 [2024-10-09 03:15:39.335393] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:56.096 [2024-10-09 03:15:39.335468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:56.096 BaseBdev4 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.096 [2024-10-09 03:15:39.345047] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:56.096 [2024-10-09 03:15:39.347066] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.096 [2024-10-09 03:15:39.347187] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:56.096 [2024-10-09 03:15:39.347283] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:56.096 [2024-10-09 03:15:39.347544] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:56.096 [2024-10-09 03:15:39.347593] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:56.096 [2024-10-09 03:15:39.347859] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:56.096 [2024-10-09 03:15:39.348076] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:56.096 [2024-10-09 03:15:39.348116] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:56.096 [2024-10-09 03:15:39.348313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:56.096 03:15:39 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.096 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:56.356 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.356 "name": "raid_bdev1", 00:12:56.356 "uuid": "efd575dd-849c-4f5c-8e4f-65876f8af318", 00:12:56.356 "strip_size_kb": 0, 00:12:56.356 "state": "online", 00:12:56.356 "raid_level": "raid1", 00:12:56.356 "superblock": true, 00:12:56.356 "num_base_bdevs": 4, 00:12:56.356 "num_base_bdevs_discovered": 4, 00:12:56.356 "num_base_bdevs_operational": 4, 00:12:56.356 "base_bdevs_list": [ 00:12:56.356 { 
00:12:56.356 "name": "BaseBdev1", 00:12:56.356 "uuid": "6c37d985-382b-59ae-80e6-9303c06cbb4f", 00:12:56.356 "is_configured": true, 00:12:56.356 "data_offset": 2048, 00:12:56.356 "data_size": 63488 00:12:56.356 }, 00:12:56.356 { 00:12:56.356 "name": "BaseBdev2", 00:12:56.356 "uuid": "480aaa5c-64e6-598f-aebc-3e5b8fbf3d37", 00:12:56.356 "is_configured": true, 00:12:56.356 "data_offset": 2048, 00:12:56.356 "data_size": 63488 00:12:56.356 }, 00:12:56.356 { 00:12:56.356 "name": "BaseBdev3", 00:12:56.356 "uuid": "fe7d8c79-f87d-5d81-b3b7-d7503fec4383", 00:12:56.356 "is_configured": true, 00:12:56.356 "data_offset": 2048, 00:12:56.356 "data_size": 63488 00:12:56.356 }, 00:12:56.356 { 00:12:56.356 "name": "BaseBdev4", 00:12:56.356 "uuid": "693e9267-f6c4-5b73-833a-d94468e2cba5", 00:12:56.356 "is_configured": true, 00:12:56.356 "data_offset": 2048, 00:12:56.356 "data_size": 63488 00:12:56.356 } 00:12:56.356 ] 00:12:56.356 }' 00:12:56.356 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.356 03:15:39 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.615 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:56.615 03:15:39 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:56.874 [2024-10-09 03:15:39.929441] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.814 03:15:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:57.814 03:15:40 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:57.814 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.814 "name": "raid_bdev1", 00:12:57.814 "uuid": "efd575dd-849c-4f5c-8e4f-65876f8af318", 00:12:57.814 "strip_size_kb": 0, 00:12:57.814 "state": "online", 00:12:57.814 "raid_level": "raid1", 00:12:57.814 "superblock": true, 00:12:57.814 "num_base_bdevs": 4, 00:12:57.814 "num_base_bdevs_discovered": 4, 00:12:57.815 "num_base_bdevs_operational": 4, 00:12:57.815 "base_bdevs_list": [ 00:12:57.815 { 00:12:57.815 "name": "BaseBdev1", 00:12:57.815 "uuid": "6c37d985-382b-59ae-80e6-9303c06cbb4f", 00:12:57.815 "is_configured": true, 00:12:57.815 "data_offset": 2048, 00:12:57.815 "data_size": 63488 00:12:57.815 }, 00:12:57.815 { 00:12:57.815 "name": "BaseBdev2", 00:12:57.815 "uuid": "480aaa5c-64e6-598f-aebc-3e5b8fbf3d37", 00:12:57.815 "is_configured": true, 00:12:57.815 "data_offset": 2048, 00:12:57.815 "data_size": 63488 00:12:57.815 }, 00:12:57.815 { 00:12:57.815 "name": "BaseBdev3", 00:12:57.815 "uuid": "fe7d8c79-f87d-5d81-b3b7-d7503fec4383", 00:12:57.815 "is_configured": true, 00:12:57.815 "data_offset": 2048, 00:12:57.815 "data_size": 63488 00:12:57.815 }, 00:12:57.815 { 00:12:57.815 "name": "BaseBdev4", 00:12:57.815 "uuid": "693e9267-f6c4-5b73-833a-d94468e2cba5", 00:12:57.815 "is_configured": true, 00:12:57.815 "data_offset": 2048, 00:12:57.815 "data_size": 63488 00:12:57.815 } 00:12:57.815 ] 00:12:57.815 }' 00:12:57.815 03:15:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.815 03:15:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:58.074 03:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:58.074 03:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.074 03:15:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:58.074 [2024-10-09 03:15:41.359464] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:58.074 [2024-10-09 03:15:41.359522] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:58.074 [2024-10-09 03:15:41.362380] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:58.074 [2024-10-09 03:15:41.362447] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:58.074 [2024-10-09 03:15:41.362571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:58.074 [2024-10-09 03:15:41.362584] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:58.074 { 00:12:58.074 "results": [ 00:12:58.074 { 00:12:58.074 "job": "raid_bdev1", 00:12:58.074 "core_mask": "0x1", 00:12:58.074 "workload": "randrw", 00:12:58.074 "percentage": 50, 00:12:58.074 "status": "finished", 00:12:58.074 "queue_depth": 1, 00:12:58.074 "io_size": 131072, 00:12:58.074 "runtime": 1.430996, 00:12:58.074 "iops": 8184.509250899374, 00:12:58.074 "mibps": 1023.0636563624217, 00:12:58.074 "io_failed": 0, 00:12:58.074 "io_timeout": 0, 00:12:58.074 "avg_latency_us": 119.63919631565133, 00:12:58.074 "min_latency_us": 23.252401746724892, 00:12:58.074 "max_latency_us": 1337.907423580786 00:12:58.074 } 00:12:58.074 ], 00:12:58.074 "core_count": 1 00:12:58.074 } 00:12:58.074 03:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.074 03:15:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75211 00:12:58.074 03:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@950 -- # '[' -z 75211 ']' 00:12:58.074 03:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # kill -0 75211 00:12:58.074 03:15:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@955 -- # uname 00:12:58.074 03:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:58.074 03:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75211 00:12:58.333 killing process with pid 75211 00:12:58.333 03:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:58.333 03:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:58.333 03:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75211' 00:12:58.333 03:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@969 -- # kill 75211 00:12:58.333 [2024-10-09 03:15:41.408403] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:58.333 03:15:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@974 -- # wait 75211 00:12:58.593 [2024-10-09 03:15:41.759913] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:59.972 03:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OLb3apn3bq 00:12:59.972 03:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:59.972 03:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:59.972 ************************************ 00:12:59.972 END TEST raid_read_error_test 00:12:59.972 ************************************ 00:12:59.972 03:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:12:59.972 03:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:12:59.972 03:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:59.972 03:15:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:12:59.972 03:15:43 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:59.972 00:12:59.972 real 0m5.096s 00:12:59.972 user 0m5.905s 00:12:59.972 sys 0m0.751s 00:12:59.972 03:15:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:59.972 03:15:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.972 03:15:43 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:12:59.972 03:15:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:12:59.972 03:15:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:59.972 03:15:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:59.972 ************************************ 00:12:59.972 START TEST raid_write_error_test 00:12:59.972 ************************************ 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1125 -- # raid_io_error_test raid1 4 write 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.BCwqBlT307 00:12:59.972 03:15:43 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75358 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75358 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@831 -- # '[' -z 75358 ']' 00:12:59.972 03:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:59.973 03:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:59.973 03:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:59.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:59.973 03:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:59.973 03:15:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:00.232 [2024-10-09 03:15:43.361433] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:13:00.232 [2024-10-09 03:15:43.361663] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75358 ] 00:13:00.232 [2024-10-09 03:15:43.527974] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:00.491 [2024-10-09 03:15:43.786649] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:00.750 [2024-10-09 03:15:44.015405] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:00.750 [2024-10-09 03:15:44.015560] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # return 0 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.009 BaseBdev1_malloc 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.009 true 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.009 [2024-10-09 03:15:44.267531] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:01.009 [2024-10-09 03:15:44.267678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.009 [2024-10-09 03:15:44.267711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:01.009 [2024-10-09 03:15:44.267742] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.009 [2024-10-09 03:15:44.270184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.009 [2024-10-09 03:15:44.270260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:01.009 BaseBdev1 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.009 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.269 BaseBdev2_malloc 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:01.269 03:15:44 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.269 true 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.269 [2024-10-09 03:15:44.353853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:01.269 [2024-10-09 03:15:44.354010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.269 [2024-10-09 03:15:44.354047] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:01.269 [2024-10-09 03:15:44.354094] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.269 [2024-10-09 03:15:44.356959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.269 [2024-10-09 03:15:44.357053] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:01.269 BaseBdev2 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:01.269 BaseBdev3_malloc 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.269 true 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.269 [2024-10-09 03:15:44.428703] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:01.269 [2024-10-09 03:15:44.428871] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.269 [2024-10-09 03:15:44.428913] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:01.269 [2024-10-09 03:15:44.428948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.269 [2024-10-09 03:15:44.431715] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.269 [2024-10-09 03:15:44.431794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:01.269 BaseBdev3 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.269 BaseBdev4_malloc 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.269 true 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.269 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.269 [2024-10-09 03:15:44.504365] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:01.269 [2024-10-09 03:15:44.504552] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:01.269 [2024-10-09 03:15:44.504595] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:01.270 [2024-10-09 03:15:44.504633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:01.270 [2024-10-09 03:15:44.507552] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:01.270 [2024-10-09 03:15:44.507652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:01.270 BaseBdev4 
00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.270 [2024-10-09 03:15:44.516594] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:01.270 [2024-10-09 03:15:44.519253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:01.270 [2024-10-09 03:15:44.519401] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:01.270 [2024-10-09 03:15:44.519494] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:01.270 [2024-10-09 03:15:44.519769] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:01.270 [2024-10-09 03:15:44.519820] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:01.270 [2024-10-09 03:15:44.520179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:01.270 [2024-10-09 03:15:44.520434] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:01.270 [2024-10-09 03:15:44.520490] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:01.270 [2024-10-09 03:15:44.520815] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.270 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.529 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:01.529 "name": "raid_bdev1", 00:13:01.529 "uuid": "39192853-f9e6-4457-ab02-a517a43b2236", 00:13:01.529 "strip_size_kb": 0, 00:13:01.529 "state": "online", 00:13:01.529 "raid_level": "raid1", 00:13:01.529 "superblock": true, 00:13:01.529 "num_base_bdevs": 4, 00:13:01.529 "num_base_bdevs_discovered": 4, 00:13:01.529 
"num_base_bdevs_operational": 4, 00:13:01.529 "base_bdevs_list": [ 00:13:01.529 { 00:13:01.529 "name": "BaseBdev1", 00:13:01.529 "uuid": "be96cc78-24f5-5765-9040-7acb093dd043", 00:13:01.529 "is_configured": true, 00:13:01.529 "data_offset": 2048, 00:13:01.529 "data_size": 63488 00:13:01.529 }, 00:13:01.529 { 00:13:01.529 "name": "BaseBdev2", 00:13:01.529 "uuid": "f910ae58-dd47-5247-b44b-316f2f065ab8", 00:13:01.529 "is_configured": true, 00:13:01.529 "data_offset": 2048, 00:13:01.529 "data_size": 63488 00:13:01.529 }, 00:13:01.529 { 00:13:01.529 "name": "BaseBdev3", 00:13:01.529 "uuid": "bbccd74c-53ed-57a1-9395-ff942fc52315", 00:13:01.529 "is_configured": true, 00:13:01.529 "data_offset": 2048, 00:13:01.529 "data_size": 63488 00:13:01.529 }, 00:13:01.529 { 00:13:01.529 "name": "BaseBdev4", 00:13:01.529 "uuid": "7f211131-927e-5ed0-a8ef-4c08bfed12cd", 00:13:01.529 "is_configured": true, 00:13:01.529 "data_offset": 2048, 00:13:01.529 "data_size": 63488 00:13:01.529 } 00:13:01.529 ] 00:13:01.529 }' 00:13:01.529 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:01.529 03:15:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:01.789 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:01.789 03:15:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:01.789 [2024-10-09 03:15:45.033362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:02.729 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:02.729 03:15:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.729 03:15:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.729 [2024-10-09 03:15:45.951453] 
bdev_raid.c:2272:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:13:02.729 [2024-10-09 03:15:45.951537] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:02.729 [2024-10-09 03:15:45.951779] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:02.729 03:15:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.729 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:02.729 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:02.729 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:02.730 03:15:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:02.730 03:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:02.730 "name": "raid_bdev1", 00:13:02.730 "uuid": "39192853-f9e6-4457-ab02-a517a43b2236", 00:13:02.730 "strip_size_kb": 0, 00:13:02.730 "state": "online", 00:13:02.730 "raid_level": "raid1", 00:13:02.730 "superblock": true, 00:13:02.730 "num_base_bdevs": 4, 00:13:02.730 "num_base_bdevs_discovered": 3, 00:13:02.730 "num_base_bdevs_operational": 3, 00:13:02.730 "base_bdevs_list": [ 00:13:02.730 { 00:13:02.730 "name": null, 00:13:02.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:02.731 "is_configured": false, 00:13:02.731 "data_offset": 0, 00:13:02.731 "data_size": 63488 00:13:02.731 }, 00:13:02.731 { 00:13:02.731 "name": "BaseBdev2", 00:13:02.731 "uuid": "f910ae58-dd47-5247-b44b-316f2f065ab8", 00:13:02.731 "is_configured": true, 00:13:02.731 "data_offset": 2048, 00:13:02.731 "data_size": 63488 00:13:02.731 }, 00:13:02.731 { 00:13:02.731 "name": "BaseBdev3", 00:13:02.731 "uuid": "bbccd74c-53ed-57a1-9395-ff942fc52315", 00:13:02.731 "is_configured": true, 00:13:02.731 "data_offset": 2048, 00:13:02.731 "data_size": 63488 00:13:02.731 }, 00:13:02.731 { 00:13:02.731 "name": "BaseBdev4", 00:13:02.731 "uuid": "7f211131-927e-5ed0-a8ef-4c08bfed12cd", 00:13:02.731 "is_configured": true, 00:13:02.731 "data_offset": 2048, 00:13:02.731 "data_size": 63488 00:13:02.731 } 00:13:02.731 ] 
00:13:02.731 }' 00:13:02.731 03:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:02.731 03:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.313 03:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:03.313 03:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.313 03:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.313 [2024-10-09 03:15:46.409145] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:03.313 [2024-10-09 03:15:46.409276] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:03.313 [2024-10-09 03:15:46.411973] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:03.313 [2024-10-09 03:15:46.412066] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:03.313 [2024-10-09 03:15:46.412208] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:03.313 [2024-10-09 03:15:46.412251] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:03.313 { 00:13:03.313 "results": [ 00:13:03.313 { 00:13:03.313 "job": "raid_bdev1", 00:13:03.313 "core_mask": "0x1", 00:13:03.313 "workload": "randrw", 00:13:03.313 "percentage": 50, 00:13:03.313 "status": "finished", 00:13:03.313 "queue_depth": 1, 00:13:03.313 "io_size": 131072, 00:13:03.313 "runtime": 1.376388, 00:13:03.313 "iops": 9000.369081973979, 00:13:03.313 "mibps": 1125.0461352467473, 00:13:03.313 "io_failed": 0, 00:13:03.313 "io_timeout": 0, 00:13:03.313 "avg_latency_us": 108.57292689220306, 00:13:03.313 "min_latency_us": 22.805240174672488, 00:13:03.313 "max_latency_us": 1459.5353711790392 00:13:03.313 } 00:13:03.313 ], 00:13:03.313 "core_count": 1 
00:13:03.313 } 00:13:03.313 03:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.313 03:15:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75358 00:13:03.313 03:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@950 -- # '[' -z 75358 ']' 00:13:03.313 03:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # kill -0 75358 00:13:03.313 03:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # uname 00:13:03.313 03:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:03.313 03:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75358 00:13:03.313 killing process with pid 75358 00:13:03.314 03:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:03.314 03:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:03.314 03:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75358' 00:13:03.314 03:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@969 -- # kill 75358 00:13:03.314 [2024-10-09 03:15:46.441826] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:03.314 03:15:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@974 -- # wait 75358 00:13:03.573 [2024-10-09 03:15:46.788169] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:04.951 03:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:04.951 03:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.BCwqBlT307 00:13:04.951 03:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:04.951 03:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
fail_per_s=0.00 00:13:04.951 03:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:04.951 03:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:04.951 03:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:04.951 03:15:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:04.951 00:13:04.951 real 0m5.006s 00:13:04.951 user 0m5.685s 00:13:04.952 sys 0m0.706s 00:13:04.952 03:15:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:04.952 03:15:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.952 ************************************ 00:13:04.952 END TEST raid_write_error_test 00:13:04.952 ************************************ 00:13:05.210 03:15:48 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:13:05.210 03:15:48 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:13:05.210 03:15:48 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:13:05.210 03:15:48 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:05.210 03:15:48 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:05.210 03:15:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:05.210 ************************************ 00:13:05.210 START TEST raid_rebuild_test 00:13:05.210 ************************************ 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false false true 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:05.210 
03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 
00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75536 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75536 00:13:05.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 75536 ']' 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:05.210 03:15:48 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.210 [2024-10-09 03:15:48.412762] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:13:05.210 [2024-10-09 03:15:48.413051] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75536 ] 00:13:05.210 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:05.210 Zero copy mechanism will not be used. 
00:13:05.469 [2024-10-09 03:15:48.583540] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:05.729 [2024-10-09 03:15:48.827928] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:05.988 [2024-10-09 03:15:49.061211] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:05.988 [2024-10-09 03:15:49.061343] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.248 BaseBdev1_malloc 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.248 [2024-10-09 03:15:49.361319] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:06.248 [2024-10-09 03:15:49.361482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.248 [2024-10-09 03:15:49.361536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:06.248 [2024-10-09 03:15:49.361577] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.248 [2024-10-09 03:15:49.363919] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.248 [2024-10-09 03:15:49.363992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:06.248 BaseBdev1 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.248 BaseBdev2_malloc 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.248 [2024-10-09 03:15:49.433883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:06.248 [2024-10-09 03:15:49.434002] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.248 [2024-10-09 03:15:49.434046] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:06.248 [2024-10-09 03:15:49.434081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.248 [2024-10-09 03:15:49.436353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.248 [2024-10-09 03:15:49.436425] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:06.248 BaseBdev2 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.248 spare_malloc 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.248 spare_delay 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.248 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.248 [2024-10-09 03:15:49.503440] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:06.248 [2024-10-09 03:15:49.503559] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.248 [2024-10-09 03:15:49.503605] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:06.248 [2024-10-09 03:15:49.503635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.248 [2024-10-09 
03:15:49.505924] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.248 [2024-10-09 03:15:49.505997] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:06.248 spare 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.249 [2024-10-09 03:15:49.515471] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:06.249 [2024-10-09 03:15:49.517522] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:06.249 [2024-10-09 03:15:49.517644] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:06.249 [2024-10-09 03:15:49.517686] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:06.249 [2024-10-09 03:15:49.517977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:06.249 [2024-10-09 03:15:49.518167] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:06.249 [2024-10-09 03:15:49.518204] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:06.249 [2024-10-09 03:15:49.518381] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:06.249 03:15:49 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.249 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.508 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.508 "name": "raid_bdev1", 00:13:06.508 "uuid": "2bfea1d8-ddb3-4c97-85ba-9be2800943a3", 00:13:06.508 "strip_size_kb": 0, 00:13:06.508 "state": "online", 00:13:06.508 "raid_level": "raid1", 00:13:06.508 "superblock": false, 00:13:06.508 "num_base_bdevs": 2, 00:13:06.508 "num_base_bdevs_discovered": 2, 00:13:06.508 "num_base_bdevs_operational": 2, 00:13:06.508 "base_bdevs_list": [ 00:13:06.508 { 00:13:06.508 "name": "BaseBdev1", 
00:13:06.508 "uuid": "0bbb7950-8b72-52b5-b2c4-260e1aa35762", 00:13:06.508 "is_configured": true, 00:13:06.508 "data_offset": 0, 00:13:06.508 "data_size": 65536 00:13:06.508 }, 00:13:06.508 { 00:13:06.508 "name": "BaseBdev2", 00:13:06.508 "uuid": "6cb14775-05c0-5504-997c-93b1b08a946a", 00:13:06.508 "is_configured": true, 00:13:06.508 "data_offset": 0, 00:13:06.508 "data_size": 65536 00:13:06.508 } 00:13:06.508 ] 00:13:06.508 }' 00:13:06.508 03:15:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.508 03:15:49 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.768 03:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:06.768 03:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:06.768 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.768 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.768 [2024-10-09 03:15:50.015018] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:06.768 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.768 03:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:06.768 03:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.768 03:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:06.768 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.768 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.768 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:07.027 
03:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:07.027 [2024-10-09 03:15:50.282311] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:07.027 /dev/nbd0 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:07.027 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:07.287 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:07.287 1+0 records in 00:13:07.287 1+0 records out 00:13:07.287 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000306954 s, 13.3 MB/s 00:13:07.287 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.287 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:07.287 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:07.287 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:07.287 03:15:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:07.287 03:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:07.287 03:15:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:07.287 03:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:07.287 03:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:07.287 03:15:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:13:11.495 65536+0 records in 00:13:11.495 65536+0 records out 00:13:11.495 33554432 bytes (34 MB, 32 MiB) copied, 4.13122 s, 8.1 MB/s 00:13:11.495 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:11.495 03:15:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:11.495 03:15:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:11.495 03:15:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:11.495 03:15:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:11.495 03:15:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:11.495 03:15:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:11.495 03:15:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:11.495 [2024-10-09 03:15:54.715561] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.495 03:15:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:11.495 03:15:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:11.495 03:15:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:11.495 03:15:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.496 [2024-10-09 03:15:54.729643] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:11.496 03:15:54 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:11.496 "name": "raid_bdev1", 00:13:11.496 "uuid": "2bfea1d8-ddb3-4c97-85ba-9be2800943a3", 00:13:11.496 "strip_size_kb": 0, 00:13:11.496 "state": "online", 00:13:11.496 "raid_level": "raid1", 00:13:11.496 "superblock": false, 00:13:11.496 "num_base_bdevs": 2, 00:13:11.496 "num_base_bdevs_discovered": 1, 00:13:11.496 "num_base_bdevs_operational": 1, 00:13:11.496 "base_bdevs_list": [ 00:13:11.496 { 00:13:11.496 "name": null, 00:13:11.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:11.496 "is_configured": false, 00:13:11.496 "data_offset": 0, 00:13:11.496 "data_size": 65536 00:13:11.496 }, 00:13:11.496 { 00:13:11.496 "name": "BaseBdev2", 00:13:11.496 "uuid": "6cb14775-05c0-5504-997c-93b1b08a946a", 00:13:11.496 "is_configured": true, 00:13:11.496 "data_offset": 0, 00:13:11.496 "data_size": 65536 00:13:11.496 } 00:13:11.496 ] 00:13:11.496 }' 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:11.496 03:15:54 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.065 03:15:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:12.065 03:15:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.065 03:15:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.065 [2024-10-09 03:15:55.161076] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:12.065 [2024-10-09 03:15:55.176076] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:13:12.065 03:15:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.065 03:15:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:12.065 [2024-10-09 03:15:55.178253] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:13:13.004 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:13.004 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.004 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:13.004 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:13.004 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.004 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.004 03:15:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.004 03:15:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.004 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.004 03:15:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.004 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.004 "name": "raid_bdev1", 00:13:13.004 "uuid": "2bfea1d8-ddb3-4c97-85ba-9be2800943a3", 00:13:13.004 "strip_size_kb": 0, 00:13:13.004 "state": "online", 00:13:13.004 "raid_level": "raid1", 00:13:13.004 "superblock": false, 00:13:13.004 "num_base_bdevs": 2, 00:13:13.004 "num_base_bdevs_discovered": 2, 00:13:13.004 "num_base_bdevs_operational": 2, 00:13:13.004 "process": { 00:13:13.004 "type": "rebuild", 00:13:13.004 "target": "spare", 00:13:13.004 "progress": { 00:13:13.004 "blocks": 20480, 00:13:13.004 "percent": 31 00:13:13.004 } 00:13:13.004 }, 00:13:13.004 "base_bdevs_list": [ 00:13:13.004 { 00:13:13.004 "name": "spare", 00:13:13.004 "uuid": "0ef09df0-31c0-521f-8201-deeebf46ade4", 00:13:13.004 "is_configured": true, 00:13:13.004 "data_offset": 0, 00:13:13.004 
"data_size": 65536 00:13:13.004 }, 00:13:13.004 { 00:13:13.004 "name": "BaseBdev2", 00:13:13.004 "uuid": "6cb14775-05c0-5504-997c-93b1b08a946a", 00:13:13.004 "is_configured": true, 00:13:13.004 "data_offset": 0, 00:13:13.004 "data_size": 65536 00:13:13.004 } 00:13:13.004 ] 00:13:13.005 }' 00:13:13.005 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.005 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:13.005 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.264 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:13.264 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:13.264 03:15:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.264 03:15:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.264 [2024-10-09 03:15:56.338136] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.265 [2024-10-09 03:15:56.387818] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:13.265 [2024-10-09 03:15:56.387960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:13.265 [2024-10-09 03:15:56.387997] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:13.265 [2024-10-09 03:15:56.388022] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:13.265 "name": "raid_bdev1", 00:13:13.265 "uuid": "2bfea1d8-ddb3-4c97-85ba-9be2800943a3", 00:13:13.265 "strip_size_kb": 0, 00:13:13.265 "state": "online", 00:13:13.265 "raid_level": "raid1", 00:13:13.265 "superblock": false, 00:13:13.265 "num_base_bdevs": 2, 00:13:13.265 "num_base_bdevs_discovered": 1, 00:13:13.265 "num_base_bdevs_operational": 1, 00:13:13.265 "base_bdevs_list": [ 00:13:13.265 { 00:13:13.265 "name": null, 00:13:13.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.265 
"is_configured": false, 00:13:13.265 "data_offset": 0, 00:13:13.265 "data_size": 65536 00:13:13.265 }, 00:13:13.265 { 00:13:13.265 "name": "BaseBdev2", 00:13:13.265 "uuid": "6cb14775-05c0-5504-997c-93b1b08a946a", 00:13:13.265 "is_configured": true, 00:13:13.265 "data_offset": 0, 00:13:13.265 "data_size": 65536 00:13:13.265 } 00:13:13.265 ] 00:13:13.265 }' 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:13.265 03:15:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.834 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:13.834 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:13.834 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:13.834 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:13.834 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:13.834 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:13.834 03:15:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.834 03:15:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.834 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:13.834 03:15:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.834 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:13.834 "name": "raid_bdev1", 00:13:13.834 "uuid": "2bfea1d8-ddb3-4c97-85ba-9be2800943a3", 00:13:13.834 "strip_size_kb": 0, 00:13:13.834 "state": "online", 00:13:13.834 "raid_level": "raid1", 00:13:13.834 "superblock": false, 00:13:13.834 "num_base_bdevs": 2, 00:13:13.834 
"num_base_bdevs_discovered": 1, 00:13:13.834 "num_base_bdevs_operational": 1, 00:13:13.834 "base_bdevs_list": [ 00:13:13.834 { 00:13:13.834 "name": null, 00:13:13.834 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:13.834 "is_configured": false, 00:13:13.834 "data_offset": 0, 00:13:13.834 "data_size": 65536 00:13:13.834 }, 00:13:13.834 { 00:13:13.834 "name": "BaseBdev2", 00:13:13.834 "uuid": "6cb14775-05c0-5504-997c-93b1b08a946a", 00:13:13.834 "is_configured": true, 00:13:13.834 "data_offset": 0, 00:13:13.834 "data_size": 65536 00:13:13.834 } 00:13:13.834 ] 00:13:13.834 }' 00:13:13.834 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:13.834 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:13.834 03:15:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:13.834 03:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:13.834 03:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:13.834 03:15:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:13.834 03:15:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.834 [2024-10-09 03:15:57.049826] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:13.834 [2024-10-09 03:15:57.066686] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:13:13.834 03:15:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:13.834 03:15:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:13.834 [2024-10-09 03:15:57.068817] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:14.822 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:14.822 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:14.822 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:14.822 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:14.822 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:14.823 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:14.823 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:14.823 03:15:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.823 03:15:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.823 03:15:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.082 "name": "raid_bdev1", 00:13:15.082 "uuid": "2bfea1d8-ddb3-4c97-85ba-9be2800943a3", 00:13:15.082 "strip_size_kb": 0, 00:13:15.082 "state": "online", 00:13:15.082 "raid_level": "raid1", 00:13:15.082 "superblock": false, 00:13:15.082 "num_base_bdevs": 2, 00:13:15.082 "num_base_bdevs_discovered": 2, 00:13:15.082 "num_base_bdevs_operational": 2, 00:13:15.082 "process": { 00:13:15.082 "type": "rebuild", 00:13:15.082 "target": "spare", 00:13:15.082 "progress": { 00:13:15.082 "blocks": 20480, 00:13:15.082 "percent": 31 00:13:15.082 } 00:13:15.082 }, 00:13:15.082 "base_bdevs_list": [ 00:13:15.082 { 00:13:15.082 "name": "spare", 00:13:15.082 "uuid": "0ef09df0-31c0-521f-8201-deeebf46ade4", 00:13:15.082 "is_configured": true, 00:13:15.082 "data_offset": 0, 00:13:15.082 "data_size": 65536 00:13:15.082 }, 00:13:15.082 { 00:13:15.082 "name": "BaseBdev2", 00:13:15.082 "uuid": 
"6cb14775-05c0-5504-997c-93b1b08a946a", 00:13:15.082 "is_configured": true, 00:13:15.082 "data_offset": 0, 00:13:15.082 "data_size": 65536 00:13:15.082 } 00:13:15.082 ] 00:13:15.082 }' 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=385 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:15.082 03:15:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.083 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.083 03:15:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.083 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:15.083 "name": "raid_bdev1", 00:13:15.083 "uuid": "2bfea1d8-ddb3-4c97-85ba-9be2800943a3", 00:13:15.083 "strip_size_kb": 0, 00:13:15.083 "state": "online", 00:13:15.083 "raid_level": "raid1", 00:13:15.083 "superblock": false, 00:13:15.083 "num_base_bdevs": 2, 00:13:15.083 "num_base_bdevs_discovered": 2, 00:13:15.083 "num_base_bdevs_operational": 2, 00:13:15.083 "process": { 00:13:15.083 "type": "rebuild", 00:13:15.083 "target": "spare", 00:13:15.083 "progress": { 00:13:15.083 "blocks": 22528, 00:13:15.083 "percent": 34 00:13:15.083 } 00:13:15.083 }, 00:13:15.083 "base_bdevs_list": [ 00:13:15.083 { 00:13:15.083 "name": "spare", 00:13:15.083 "uuid": "0ef09df0-31c0-521f-8201-deeebf46ade4", 00:13:15.083 "is_configured": true, 00:13:15.083 "data_offset": 0, 00:13:15.083 "data_size": 65536 00:13:15.083 }, 00:13:15.083 { 00:13:15.083 "name": "BaseBdev2", 00:13:15.083 "uuid": "6cb14775-05c0-5504-997c-93b1b08a946a", 00:13:15.083 "is_configured": true, 00:13:15.083 "data_offset": 0, 00:13:15.083 "data_size": 65536 00:13:15.083 } 00:13:15.083 ] 00:13:15.083 }' 00:13:15.083 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:15.083 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:15.083 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:15.083 03:15:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:15.083 03:15:58 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:13:16.461 03:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:16.461 03:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:16.461 03:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:16.461 03:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:16.461 03:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:16.461 03:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:16.461 03:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.461 03:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.461 03:15:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.461 03:15:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.461 03:15:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.461 03:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:16.461 "name": "raid_bdev1", 00:13:16.461 "uuid": "2bfea1d8-ddb3-4c97-85ba-9be2800943a3", 00:13:16.461 "strip_size_kb": 0, 00:13:16.461 "state": "online", 00:13:16.461 "raid_level": "raid1", 00:13:16.461 "superblock": false, 00:13:16.461 "num_base_bdevs": 2, 00:13:16.461 "num_base_bdevs_discovered": 2, 00:13:16.461 "num_base_bdevs_operational": 2, 00:13:16.461 "process": { 00:13:16.461 "type": "rebuild", 00:13:16.461 "target": "spare", 00:13:16.461 "progress": { 00:13:16.461 "blocks": 45056, 00:13:16.461 "percent": 68 00:13:16.461 } 00:13:16.461 }, 00:13:16.461 "base_bdevs_list": [ 00:13:16.461 { 00:13:16.461 "name": "spare", 00:13:16.461 "uuid": 
"0ef09df0-31c0-521f-8201-deeebf46ade4", 00:13:16.461 "is_configured": true, 00:13:16.461 "data_offset": 0, 00:13:16.461 "data_size": 65536 00:13:16.461 }, 00:13:16.461 { 00:13:16.461 "name": "BaseBdev2", 00:13:16.461 "uuid": "6cb14775-05c0-5504-997c-93b1b08a946a", 00:13:16.461 "is_configured": true, 00:13:16.461 "data_offset": 0, 00:13:16.461 "data_size": 65536 00:13:16.461 } 00:13:16.461 ] 00:13:16.461 }' 00:13:16.462 03:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:16.462 03:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:16.462 03:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:16.462 03:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:16.462 03:15:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:17.030 [2024-10-09 03:16:00.291792] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:17.030 [2024-10-09 03:16:00.291998] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:17.030 [2024-10-09 03:16:00.292069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.289 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:17.289 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:17.289 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.289 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:17.289 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:17.289 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.289 03:16:00 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.289 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.289 03:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.289 03:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.289 03:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.289 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.289 "name": "raid_bdev1", 00:13:17.289 "uuid": "2bfea1d8-ddb3-4c97-85ba-9be2800943a3", 00:13:17.289 "strip_size_kb": 0, 00:13:17.289 "state": "online", 00:13:17.289 "raid_level": "raid1", 00:13:17.289 "superblock": false, 00:13:17.289 "num_base_bdevs": 2, 00:13:17.289 "num_base_bdevs_discovered": 2, 00:13:17.289 "num_base_bdevs_operational": 2, 00:13:17.289 "base_bdevs_list": [ 00:13:17.289 { 00:13:17.289 "name": "spare", 00:13:17.289 "uuid": "0ef09df0-31c0-521f-8201-deeebf46ade4", 00:13:17.289 "is_configured": true, 00:13:17.289 "data_offset": 0, 00:13:17.289 "data_size": 65536 00:13:17.289 }, 00:13:17.289 { 00:13:17.289 "name": "BaseBdev2", 00:13:17.289 "uuid": "6cb14775-05c0-5504-997c-93b1b08a946a", 00:13:17.289 "is_configured": true, 00:13:17.289 "data_offset": 0, 00:13:17.289 "data_size": 65536 00:13:17.289 } 00:13:17.289 ] 00:13:17.289 }' 00:13:17.289 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.548 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:17.548 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:17.549 "name": "raid_bdev1", 00:13:17.549 "uuid": "2bfea1d8-ddb3-4c97-85ba-9be2800943a3", 00:13:17.549 "strip_size_kb": 0, 00:13:17.549 "state": "online", 00:13:17.549 "raid_level": "raid1", 00:13:17.549 "superblock": false, 00:13:17.549 "num_base_bdevs": 2, 00:13:17.549 "num_base_bdevs_discovered": 2, 00:13:17.549 "num_base_bdevs_operational": 2, 00:13:17.549 "base_bdevs_list": [ 00:13:17.549 { 00:13:17.549 "name": "spare", 00:13:17.549 "uuid": "0ef09df0-31c0-521f-8201-deeebf46ade4", 00:13:17.549 "is_configured": true, 00:13:17.549 "data_offset": 0, 00:13:17.549 "data_size": 65536 00:13:17.549 }, 00:13:17.549 { 00:13:17.549 "name": "BaseBdev2", 00:13:17.549 "uuid": "6cb14775-05c0-5504-997c-93b1b08a946a", 00:13:17.549 "is_configured": true, 00:13:17.549 "data_offset": 0, 00:13:17.549 "data_size": 65536 
00:13:17.549 } 00:13:17.549 ] 00:13:17.549 }' 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.549 
03:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:17.549 "name": "raid_bdev1", 00:13:17.549 "uuid": "2bfea1d8-ddb3-4c97-85ba-9be2800943a3", 00:13:17.549 "strip_size_kb": 0, 00:13:17.549 "state": "online", 00:13:17.549 "raid_level": "raid1", 00:13:17.549 "superblock": false, 00:13:17.549 "num_base_bdevs": 2, 00:13:17.549 "num_base_bdevs_discovered": 2, 00:13:17.549 "num_base_bdevs_operational": 2, 00:13:17.549 "base_bdevs_list": [ 00:13:17.549 { 00:13:17.549 "name": "spare", 00:13:17.549 "uuid": "0ef09df0-31c0-521f-8201-deeebf46ade4", 00:13:17.549 "is_configured": true, 00:13:17.549 "data_offset": 0, 00:13:17.549 "data_size": 65536 00:13:17.549 }, 00:13:17.549 { 00:13:17.549 "name": "BaseBdev2", 00:13:17.549 "uuid": "6cb14775-05c0-5504-997c-93b1b08a946a", 00:13:17.549 "is_configured": true, 00:13:17.549 "data_offset": 0, 00:13:17.549 "data_size": 65536 00:13:17.549 } 00:13:17.549 ] 00:13:17.549 }' 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:17.549 03:16:00 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.117 [2024-10-09 03:16:01.224669] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:18.117 [2024-10-09 03:16:01.224806] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:18.117 [2024-10-09 03:16:01.224939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:18.117 [2024-10-09 03:16:01.225034] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:18.117 [2024-10-09 03:16:01.225103] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:18.117 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:18.375 /dev/nbd0 00:13:18.375 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:18.375 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:18.375 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:18.375 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:18.375 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.375 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.375 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:18.375 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:18.375 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.375 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.376 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.376 1+0 records in 00:13:18.376 1+0 records out 00:13:18.376 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000514839 s, 8.0 MB/s 00:13:18.376 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.376 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:18.376 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.376 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.376 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:18.376 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.376 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:18.376 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:18.634 /dev/nbd1 00:13:18.634 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:18.634 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:18.634 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:18.634 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:13:18.634 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:18.634 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:18.634 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:18.634 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:13:18.634 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:18.634 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:18.634 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:18.634 1+0 records in 00:13:18.634 1+0 records out 00:13:18.634 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000471786 s, 8.7 MB/s 00:13:18.634 03:16:01 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.634 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:13:18.634 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:18.634 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:18.635 03:16:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:13:18.635 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:18.635 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:18.635 03:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:18.893 03:16:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:18.893 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:18.893 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:18.893 03:16:01 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:18.894 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:13:18.894 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:18.894 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:19.152 
03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:13:19.152 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:13:19.153 03:16:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75536 00:13:19.153 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 75536 ']' 00:13:19.153 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 75536 00:13:19.153 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 
-- # uname 00:13:19.153 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:19.153 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75536 00:13:19.412 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:19.412 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:19.412 killing process with pid 75536 00:13:19.412 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75536' 00:13:19.412 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 75536 00:13:19.412 Received shutdown signal, test time was about 60.000000 seconds 00:13:19.412 00:13:19.412 Latency(us) 00:13:19.412 [2024-10-09T03:16:02.715Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:19.412 [2024-10-09T03:16:02.715Z] =================================================================================================================== 00:13:19.412 [2024-10-09T03:16:02.715Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:19.412 [2024-10-09 03:16:02.472159] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:19.412 03:16:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 75536 00:13:19.671 [2024-10-09 03:16:02.795318] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:13:21.081 00:13:21.081 real 0m15.836s 00:13:21.081 user 0m17.734s 00:13:21.081 sys 0m3.370s 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.081 ************************************ 00:13:21.081 END TEST raid_rebuild_test 
00:13:21.081 ************************************ 00:13:21.081 03:16:04 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:13:21.081 03:16:04 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:21.081 03:16:04 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:21.081 03:16:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:21.081 ************************************ 00:13:21.081 START TEST raid_rebuild_test_sb 00:13:21.081 ************************************ 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75973 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75973 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 75973 ']' 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:21.081 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:21.081 03:16:04 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.081 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:21.081 Zero copy mechanism will not be used. 00:13:21.081 [2024-10-09 03:16:04.308783] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:13:21.081 [2024-10-09 03:16:04.308925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75973 ] 00:13:21.340 [2024-10-09 03:16:04.458022] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:21.599 [2024-10-09 03:16:04.716507] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:21.859 [2024-10-09 03:16:04.946492] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.859 [2024-10-09 03:16:04.946537] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:21.859 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:21.859 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:13:21.859 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:21.859 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:21.859 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:13:21.859 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.118 BaseBdev1_malloc 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.118 [2024-10-09 03:16:05.187526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:22.118 [2024-10-09 03:16:05.187602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.118 [2024-10-09 03:16:05.187627] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:22.118 [2024-10-09 03:16:05.187643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.118 [2024-10-09 03:16:05.190054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.118 [2024-10-09 03:16:05.190091] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:22.118 BaseBdev1 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.118 BaseBdev2_malloc 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.118 [2024-10-09 03:16:05.263265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:22.118 [2024-10-09 03:16:05.263329] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.118 [2024-10-09 03:16:05.263349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:22.118 [2024-10-09 03:16:05.263360] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.118 [2024-10-09 03:16:05.265713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.118 [2024-10-09 03:16:05.265750] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:22.118 BaseBdev2 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.118 spare_malloc 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.118 spare_delay 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.118 [2024-10-09 03:16:05.336612] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:22.118 [2024-10-09 03:16:05.336710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:22.118 [2024-10-09 03:16:05.336739] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:22.118 [2024-10-09 03:16:05.336752] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:22.118 [2024-10-09 03:16:05.339428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:22.118 [2024-10-09 03:16:05.339473] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:22.118 spare 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.118 [2024-10-09 03:16:05.348660] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:22.118 [2024-10-09 03:16:05.350909] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.118 [2024-10-09 03:16:05.351120] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:22.118 [2024-10-09 03:16:05.351143] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:22.118 [2024-10-09 03:16:05.351480] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:22.118 [2024-10-09 03:16:05.351715] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:22.118 [2024-10-09 03:16:05.351731] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:22.118 [2024-10-09 03:16:05.351965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:22.118 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.119 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.119 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.119 03:16:05 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.119 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.119 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:22.119 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.119 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.119 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.119 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.119 "name": "raid_bdev1", 00:13:22.119 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:22.119 "strip_size_kb": 0, 00:13:22.119 "state": "online", 00:13:22.119 "raid_level": "raid1", 00:13:22.119 "superblock": true, 00:13:22.119 "num_base_bdevs": 2, 00:13:22.119 "num_base_bdevs_discovered": 2, 00:13:22.119 "num_base_bdevs_operational": 2, 00:13:22.119 "base_bdevs_list": [ 00:13:22.119 { 00:13:22.119 "name": "BaseBdev1", 00:13:22.119 "uuid": "19a805c9-6288-56f6-81e6-9fd406204894", 00:13:22.119 "is_configured": true, 00:13:22.119 "data_offset": 2048, 00:13:22.119 "data_size": 63488 00:13:22.119 }, 00:13:22.119 { 00:13:22.119 "name": "BaseBdev2", 00:13:22.119 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:22.119 "is_configured": true, 00:13:22.119 "data_offset": 2048, 00:13:22.119 "data_size": 63488 00:13:22.119 } 00:13:22.119 ] 00:13:22.119 }' 00:13:22.119 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.119 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.688 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:22.688 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:13:22.688 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.688 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:22.688 [2024-10-09 03:16:05.768250] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:22.688 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.688 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:22.688 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.688 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:22.689 03:16:05 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:22.689 03:16:05 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:13:22.950 [2024-10-09 03:16:06.043513] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:22.950 /dev/nbd0 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:22.950 1+0 records in 00:13:22.950 1+0 records out 00:13:22.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000382259 s, 10.7 MB/s 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:13:22.950 03:16:06 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:13:27.141 63488+0 records in 00:13:27.141 63488+0 records out 00:13:27.141 32505856 bytes (33 MB, 31 MiB) copied, 4.21644 s, 7.7 MB/s 00:13:27.141 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:27.141 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:27.141 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:27.141 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:27.141 03:16:10 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@51 -- # local i 00:13:27.141 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:27.141 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:27.400 [2024-10-09 03:16:10.567738] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.400 [2024-10-09 03:16:10.603832] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.400 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.400 "name": "raid_bdev1", 00:13:27.400 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:27.400 "strip_size_kb": 0, 00:13:27.400 "state": "online", 00:13:27.400 "raid_level": "raid1", 00:13:27.400 "superblock": true, 00:13:27.400 "num_base_bdevs": 2, 00:13:27.400 "num_base_bdevs_discovered": 1, 00:13:27.400 "num_base_bdevs_operational": 1, 00:13:27.400 "base_bdevs_list": [ 00:13:27.400 { 00:13:27.400 "name": null, 00:13:27.400 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:27.400 "is_configured": false, 00:13:27.400 "data_offset": 0, 00:13:27.400 "data_size": 63488 00:13:27.400 }, 00:13:27.401 { 00:13:27.401 "name": "BaseBdev2", 00:13:27.401 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:27.401 "is_configured": true, 00:13:27.401 "data_offset": 2048, 00:13:27.401 "data_size": 63488 00:13:27.401 } 00:13:27.401 ] 00:13:27.401 }' 00:13:27.401 03:16:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.401 03:16:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.972 03:16:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:27.972 03:16:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:27.972 03:16:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:27.972 [2024-10-09 03:16:11.031083] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:27.972 [2024-10-09 03:16:11.047230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:13:27.972 03:16:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:27.972 03:16:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:27.972 [2024-10-09 03:16:11.049381] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:28.939 
03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:28.939 "name": "raid_bdev1", 00:13:28.939 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:28.939 "strip_size_kb": 0, 00:13:28.939 "state": "online", 00:13:28.939 "raid_level": "raid1", 00:13:28.939 "superblock": true, 00:13:28.939 "num_base_bdevs": 2, 00:13:28.939 "num_base_bdevs_discovered": 2, 00:13:28.939 "num_base_bdevs_operational": 2, 00:13:28.939 "process": { 00:13:28.939 "type": "rebuild", 00:13:28.939 "target": "spare", 00:13:28.939 "progress": { 00:13:28.939 "blocks": 20480, 00:13:28.939 "percent": 32 00:13:28.939 } 00:13:28.939 }, 00:13:28.939 "base_bdevs_list": [ 00:13:28.939 { 00:13:28.939 "name": "spare", 00:13:28.939 "uuid": "16088194-0db2-5de1-80a2-3d337891913e", 00:13:28.939 "is_configured": true, 00:13:28.939 "data_offset": 2048, 00:13:28.939 "data_size": 63488 00:13:28.939 }, 00:13:28.939 { 00:13:28.939 "name": "BaseBdev2", 00:13:28.939 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:28.939 "is_configured": true, 00:13:28.939 "data_offset": 2048, 00:13:28.939 "data_size": 63488 00:13:28.939 } 00:13:28.939 ] 00:13:28.939 }' 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:28.939 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:28.939 [2024-10-09 03:16:12.185545] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.199 [2024-10-09 03:16:12.259300] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:29.199 [2024-10-09 03:16:12.259385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.199 [2024-10-09 03:16:12.259402] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:29.199 [2024-10-09 03:16:12.259413] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.199 "name": "raid_bdev1", 00:13:29.199 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:29.199 "strip_size_kb": 0, 00:13:29.199 "state": "online", 00:13:29.199 "raid_level": "raid1", 00:13:29.199 "superblock": true, 00:13:29.199 "num_base_bdevs": 2, 00:13:29.199 "num_base_bdevs_discovered": 1, 00:13:29.199 "num_base_bdevs_operational": 1, 00:13:29.199 "base_bdevs_list": [ 00:13:29.199 { 00:13:29.199 "name": null, 00:13:29.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.199 "is_configured": false, 00:13:29.199 "data_offset": 0, 00:13:29.199 "data_size": 63488 00:13:29.199 }, 00:13:29.199 { 00:13:29.199 "name": "BaseBdev2", 00:13:29.199 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:29.199 "is_configured": true, 00:13:29.199 "data_offset": 2048, 00:13:29.199 "data_size": 63488 00:13:29.199 } 00:13:29.199 ] 00:13:29.199 }' 00:13:29.199 03:16:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.199 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.459 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:29.459 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:29.459 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:29.459 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:29.459 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:29.459 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:29.459 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.459 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.459 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.459 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.459 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:29.459 "name": "raid_bdev1", 00:13:29.459 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:29.459 "strip_size_kb": 0, 00:13:29.459 "state": "online", 00:13:29.459 "raid_level": "raid1", 00:13:29.459 "superblock": true, 00:13:29.459 "num_base_bdevs": 2, 00:13:29.459 "num_base_bdevs_discovered": 1, 00:13:29.459 "num_base_bdevs_operational": 1, 00:13:29.459 "base_bdevs_list": [ 00:13:29.459 { 00:13:29.459 "name": null, 00:13:29.459 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.459 "is_configured": false, 00:13:29.459 "data_offset": 0, 00:13:29.459 "data_size": 63488 00:13:29.459 }, 00:13:29.459 
{ 00:13:29.459 "name": "BaseBdev2", 00:13:29.459 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:29.459 "is_configured": true, 00:13:29.459 "data_offset": 2048, 00:13:29.459 "data_size": 63488 00:13:29.459 } 00:13:29.459 ] 00:13:29.459 }' 00:13:29.459 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:29.719 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:29.719 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:29.719 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:29.719 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:29.719 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.719 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.719 [2024-10-09 03:16:12.803008] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:29.719 [2024-10-09 03:16:12.818440] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:13:29.719 03:16:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.719 03:16:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:29.720 [2024-10-09 03:16:12.820583] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.658 03:16:13 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.658 "name": "raid_bdev1", 00:13:30.658 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:30.658 "strip_size_kb": 0, 00:13:30.658 "state": "online", 00:13:30.658 "raid_level": "raid1", 00:13:30.658 "superblock": true, 00:13:30.658 "num_base_bdevs": 2, 00:13:30.658 "num_base_bdevs_discovered": 2, 00:13:30.658 "num_base_bdevs_operational": 2, 00:13:30.658 "process": { 00:13:30.658 "type": "rebuild", 00:13:30.658 "target": "spare", 00:13:30.658 "progress": { 00:13:30.658 "blocks": 20480, 00:13:30.658 "percent": 32 00:13:30.658 } 00:13:30.658 }, 00:13:30.658 "base_bdevs_list": [ 00:13:30.658 { 00:13:30.658 "name": "spare", 00:13:30.658 "uuid": "16088194-0db2-5de1-80a2-3d337891913e", 00:13:30.658 "is_configured": true, 00:13:30.658 "data_offset": 2048, 00:13:30.658 "data_size": 63488 00:13:30.658 }, 00:13:30.658 { 00:13:30.658 "name": "BaseBdev2", 00:13:30.658 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:30.658 "is_configured": true, 00:13:30.658 "data_offset": 2048, 00:13:30.658 "data_size": 63488 00:13:30.658 } 00:13:30.658 ] 00:13:30.658 }' 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:13:30.658 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=400 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.658 03:16:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.918 03:16:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.918 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:30.918 "name": "raid_bdev1", 00:13:30.918 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:30.918 "strip_size_kb": 0, 00:13:30.918 "state": "online", 00:13:30.918 "raid_level": "raid1", 00:13:30.918 "superblock": true, 00:13:30.918 "num_base_bdevs": 2, 00:13:30.918 "num_base_bdevs_discovered": 2, 00:13:30.918 "num_base_bdevs_operational": 2, 00:13:30.918 "process": { 00:13:30.918 "type": "rebuild", 00:13:30.918 "target": "spare", 00:13:30.918 "progress": { 00:13:30.918 "blocks": 22528, 00:13:30.918 "percent": 35 00:13:30.918 } 00:13:30.918 }, 00:13:30.918 "base_bdevs_list": [ 00:13:30.918 { 00:13:30.918 "name": "spare", 00:13:30.918 "uuid": "16088194-0db2-5de1-80a2-3d337891913e", 00:13:30.918 "is_configured": true, 00:13:30.918 "data_offset": 2048, 00:13:30.918 "data_size": 63488 00:13:30.918 }, 00:13:30.918 { 00:13:30.918 "name": "BaseBdev2", 00:13:30.918 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:30.918 "is_configured": true, 00:13:30.918 "data_offset": 2048, 00:13:30.918 "data_size": 63488 00:13:30.918 } 00:13:30.918 ] 00:13:30.918 }' 00:13:30.918 03:16:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:30.918 03:16:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:30.918 03:16:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:30.918 03:16:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:30.918 03:16:14 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:31.856 03:16:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:31.856 03:16:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:31.856 03:16:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:31.856 03:16:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:31.856 03:16:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:31.856 03:16:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:31.856 03:16:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:31.856 03:16:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:31.856 03:16:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:31.856 03:16:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.856 03:16:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:31.856 03:16:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:31.856 "name": "raid_bdev1", 00:13:31.856 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:31.856 "strip_size_kb": 0, 00:13:31.856 "state": "online", 00:13:31.856 "raid_level": "raid1", 00:13:31.856 "superblock": true, 00:13:31.856 "num_base_bdevs": 2, 00:13:31.856 "num_base_bdevs_discovered": 2, 00:13:31.856 "num_base_bdevs_operational": 2, 00:13:31.856 "process": { 00:13:31.856 "type": "rebuild", 00:13:31.856 "target": "spare", 00:13:31.856 "progress": { 00:13:31.856 "blocks": 45056, 00:13:31.856 "percent": 70 00:13:31.856 } 00:13:31.856 }, 00:13:31.856 "base_bdevs_list": [ 00:13:31.856 { 
00:13:31.856 "name": "spare", 00:13:31.856 "uuid": "16088194-0db2-5de1-80a2-3d337891913e", 00:13:31.856 "is_configured": true, 00:13:31.856 "data_offset": 2048, 00:13:31.856 "data_size": 63488 00:13:31.856 }, 00:13:31.856 { 00:13:31.856 "name": "BaseBdev2", 00:13:31.856 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:31.856 "is_configured": true, 00:13:31.856 "data_offset": 2048, 00:13:31.856 "data_size": 63488 00:13:31.856 } 00:13:31.856 ] 00:13:31.856 }' 00:13:31.856 03:16:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:32.116 03:16:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:32.116 03:16:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:32.116 03:16:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:32.116 03:16:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:32.695 [2024-10-09 03:16:15.944836] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:32.695 [2024-10-09 03:16:15.944950] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:32.695 [2024-10-09 03:16:15.945064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:32.960 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:32.960 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:32.960 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:32.960 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:32.960 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:32.960 03:16:16 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:32.960 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:32.960 03:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.960 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:32.960 03:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:32.960 03:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.219 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.219 "name": "raid_bdev1", 00:13:33.219 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:33.219 "strip_size_kb": 0, 00:13:33.219 "state": "online", 00:13:33.219 "raid_level": "raid1", 00:13:33.219 "superblock": true, 00:13:33.219 "num_base_bdevs": 2, 00:13:33.219 "num_base_bdevs_discovered": 2, 00:13:33.219 "num_base_bdevs_operational": 2, 00:13:33.219 "base_bdevs_list": [ 00:13:33.219 { 00:13:33.219 "name": "spare", 00:13:33.219 "uuid": "16088194-0db2-5de1-80a2-3d337891913e", 00:13:33.219 "is_configured": true, 00:13:33.219 "data_offset": 2048, 00:13:33.219 "data_size": 63488 00:13:33.219 }, 00:13:33.219 { 00:13:33.219 "name": "BaseBdev2", 00:13:33.219 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:33.219 "is_configured": true, 00:13:33.220 "data_offset": 2048, 00:13:33.220 "data_size": 63488 00:13:33.220 } 00:13:33.220 ] 00:13:33.220 }' 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:33.220 "name": "raid_bdev1", 00:13:33.220 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:33.220 "strip_size_kb": 0, 00:13:33.220 "state": "online", 00:13:33.220 "raid_level": "raid1", 00:13:33.220 "superblock": true, 00:13:33.220 "num_base_bdevs": 2, 00:13:33.220 "num_base_bdevs_discovered": 2, 00:13:33.220 "num_base_bdevs_operational": 2, 00:13:33.220 "base_bdevs_list": [ 00:13:33.220 { 00:13:33.220 "name": "spare", 00:13:33.220 "uuid": "16088194-0db2-5de1-80a2-3d337891913e", 00:13:33.220 "is_configured": true, 00:13:33.220 "data_offset": 2048, 00:13:33.220 "data_size": 63488 00:13:33.220 }, 00:13:33.220 { 00:13:33.220 "name": 
"BaseBdev2", 00:13:33.220 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:33.220 "is_configured": true, 00:13:33.220 "data_offset": 2048, 00:13:33.220 "data_size": 63488 00:13:33.220 } 00:13:33.220 ] 00:13:33.220 }' 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:33.220 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:33.479 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:33.479 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:33.479 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:33.479 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:33.479 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.479 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.479 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:33.479 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.479 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.479 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.479 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.480 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.480 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:13:33.480 03:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.480 03:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.480 03:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.480 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.480 "name": "raid_bdev1", 00:13:33.480 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:33.480 "strip_size_kb": 0, 00:13:33.480 "state": "online", 00:13:33.480 "raid_level": "raid1", 00:13:33.480 "superblock": true, 00:13:33.480 "num_base_bdevs": 2, 00:13:33.480 "num_base_bdevs_discovered": 2, 00:13:33.480 "num_base_bdevs_operational": 2, 00:13:33.480 "base_bdevs_list": [ 00:13:33.480 { 00:13:33.480 "name": "spare", 00:13:33.480 "uuid": "16088194-0db2-5de1-80a2-3d337891913e", 00:13:33.480 "is_configured": true, 00:13:33.480 "data_offset": 2048, 00:13:33.480 "data_size": 63488 00:13:33.480 }, 00:13:33.480 { 00:13:33.480 "name": "BaseBdev2", 00:13:33.480 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:33.480 "is_configured": true, 00:13:33.480 "data_offset": 2048, 00:13:33.480 "data_size": 63488 00:13:33.480 } 00:13:33.480 ] 00:13:33.480 }' 00:13:33.480 03:16:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.480 03:16:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.739 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:33.739 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.739 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.739 [2024-10-09 03:16:17.033271] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:33.739 [2024-10-09 03:16:17.033405] 
bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:33.739 [2024-10-09 03:16:17.033530] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:33.739 [2024-10-09 03:16:17.033622] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:33.739 [2024-10-09 03:16:17.033667] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:33.739 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:13:33.999 /dev/nbd0 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:33.999 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:34.259 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.260 1+0 records in 00:13:34.260 1+0 records out 00:13:34.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000422073 s, 9.7 MB/s 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:13:34.260 /dev/nbd1 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:34.260 03:16:17 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:34.260 1+0 records in 00:13:34.260 1+0 records out 00:13:34.260 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031884 s, 12.8 MB/s 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:13:34.260 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:13:34.520 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:13:34.520 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:34.520 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:13:34.520 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:34.520 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:13:34.520 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.520 
03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:34.779 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:34.779 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:34.779 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:34.779 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:34.779 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:34.779 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:34.779 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:34.779 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:13:34.779 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:34.779 03:16:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.039 [2024-10-09 03:16:18.169470] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:35.039 [2024-10-09 03:16:18.169553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:35.039 [2024-10-09 03:16:18.169579] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:35.039 [2024-10-09 03:16:18.169589] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:35.039 [2024-10-09 03:16:18.172121] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:35.039 [2024-10-09 03:16:18.172153] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:35.039 [2024-10-09 03:16:18.172247] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:35.039 [2024-10-09 03:16:18.172309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:35.039 [2024-10-09 03:16:18.172460] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:13:35.039 spare 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.039 [2024-10-09 03:16:18.272368] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:13:35.039 [2024-10-09 03:16:18.272403] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:35.039 [2024-10-09 03:16:18.272724] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:13:35.039 [2024-10-09 03:16:18.272969] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:13:35.039 [2024-10-09 03:16:18.272986] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:13:35.039 [2024-10-09 03:16:18.273173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.039 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:13:35.040 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.040 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.040 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.040 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.040 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.040 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.040 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.040 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.040 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.040 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.040 "name": "raid_bdev1", 00:13:35.040 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:35.040 "strip_size_kb": 0, 00:13:35.040 "state": "online", 00:13:35.040 "raid_level": "raid1", 00:13:35.040 "superblock": true, 00:13:35.040 "num_base_bdevs": 2, 00:13:35.040 "num_base_bdevs_discovered": 2, 00:13:35.040 "num_base_bdevs_operational": 2, 00:13:35.040 "base_bdevs_list": [ 00:13:35.040 { 00:13:35.040 "name": "spare", 00:13:35.040 "uuid": "16088194-0db2-5de1-80a2-3d337891913e", 00:13:35.040 "is_configured": true, 00:13:35.040 "data_offset": 2048, 00:13:35.040 "data_size": 63488 00:13:35.040 }, 00:13:35.040 { 00:13:35.040 "name": "BaseBdev2", 00:13:35.040 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:35.040 "is_configured": true, 00:13:35.040 "data_offset": 2048, 00:13:35.040 "data_size": 63488 00:13:35.040 } 00:13:35.040 ] 00:13:35.040 }' 00:13:35.040 03:16:18 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.040 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:35.609 "name": "raid_bdev1", 00:13:35.609 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:35.609 "strip_size_kb": 0, 00:13:35.609 "state": "online", 00:13:35.609 "raid_level": "raid1", 00:13:35.609 "superblock": true, 00:13:35.609 "num_base_bdevs": 2, 00:13:35.609 "num_base_bdevs_discovered": 2, 00:13:35.609 "num_base_bdevs_operational": 2, 00:13:35.609 "base_bdevs_list": [ 00:13:35.609 { 00:13:35.609 "name": "spare", 00:13:35.609 "uuid": "16088194-0db2-5de1-80a2-3d337891913e", 00:13:35.609 "is_configured": true, 00:13:35.609 "data_offset": 2048, 00:13:35.609 "data_size": 63488 00:13:35.609 }, 
00:13:35.609 { 00:13:35.609 "name": "BaseBdev2", 00:13:35.609 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:35.609 "is_configured": true, 00:13:35.609 "data_offset": 2048, 00:13:35.609 "data_size": 63488 00:13:35.609 } 00:13:35.609 ] 00:13:35.609 }' 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.609 [2024-10-09 03:16:18.832552] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:35.609 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.609 "name": "raid_bdev1", 00:13:35.609 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:35.609 "strip_size_kb": 0, 00:13:35.610 "state": "online", 00:13:35.610 "raid_level": "raid1", 00:13:35.610 "superblock": true, 00:13:35.610 "num_base_bdevs": 2, 00:13:35.610 "num_base_bdevs_discovered": 1, 00:13:35.610 "num_base_bdevs_operational": 
1, 00:13:35.610 "base_bdevs_list": [ 00:13:35.610 { 00:13:35.610 "name": null, 00:13:35.610 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.610 "is_configured": false, 00:13:35.610 "data_offset": 0, 00:13:35.610 "data_size": 63488 00:13:35.610 }, 00:13:35.610 { 00:13:35.610 "name": "BaseBdev2", 00:13:35.610 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:35.610 "is_configured": true, 00:13:35.610 "data_offset": 2048, 00:13:35.610 "data_size": 63488 00:13:35.610 } 00:13:35.610 ] 00:13:35.610 }' 00:13:35.610 03:16:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.610 03:16:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.186 03:16:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:36.186 03:16:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:36.186 03:16:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.186 [2024-10-09 03:16:19.311790] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.186 [2024-10-09 03:16:19.312048] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:36.186 [2024-10-09 03:16:19.312073] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:13:36.186 [2024-10-09 03:16:19.312115] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:36.186 [2024-10-09 03:16:19.327719] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:13:36.186 03:16:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:36.186 03:16:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:13:36.186 [2024-10-09 03:16:19.329917] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:37.151 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:37.151 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:37.151 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:37.151 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:37.151 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:37.151 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.151 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.151 03:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.151 03:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.151 03:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.151 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:37.151 "name": "raid_bdev1", 00:13:37.151 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:37.151 "strip_size_kb": 0, 00:13:37.151 "state": "online", 00:13:37.151 "raid_level": "raid1", 
00:13:37.151 "superblock": true, 00:13:37.151 "num_base_bdevs": 2, 00:13:37.151 "num_base_bdevs_discovered": 2, 00:13:37.151 "num_base_bdevs_operational": 2, 00:13:37.151 "process": { 00:13:37.151 "type": "rebuild", 00:13:37.151 "target": "spare", 00:13:37.151 "progress": { 00:13:37.151 "blocks": 20480, 00:13:37.151 "percent": 32 00:13:37.151 } 00:13:37.151 }, 00:13:37.151 "base_bdevs_list": [ 00:13:37.151 { 00:13:37.151 "name": "spare", 00:13:37.151 "uuid": "16088194-0db2-5de1-80a2-3d337891913e", 00:13:37.151 "is_configured": true, 00:13:37.151 "data_offset": 2048, 00:13:37.151 "data_size": 63488 00:13:37.151 }, 00:13:37.151 { 00:13:37.151 "name": "BaseBdev2", 00:13:37.151 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:37.151 "is_configured": true, 00:13:37.151 "data_offset": 2048, 00:13:37.151 "data_size": 63488 00:13:37.151 } 00:13:37.151 ] 00:13:37.151 }' 00:13:37.151 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:37.151 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:37.151 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:37.410 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:37.410 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:13:37.410 03:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.410 03:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.410 [2024-10-09 03:16:20.473023] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.410 [2024-10-09 03:16:20.538467] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:37.410 [2024-10-09 03:16:20.538557] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:13:37.410 [2024-10-09 03:16:20.538572] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:37.410 [2024-10-09 03:16:20.538583] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:37.410 03:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.410 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:37.411 "name": "raid_bdev1", 00:13:37.411 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:37.411 "strip_size_kb": 0, 00:13:37.411 "state": "online", 00:13:37.411 "raid_level": "raid1", 00:13:37.411 "superblock": true, 00:13:37.411 "num_base_bdevs": 2, 00:13:37.411 "num_base_bdevs_discovered": 1, 00:13:37.411 "num_base_bdevs_operational": 1, 00:13:37.411 "base_bdevs_list": [ 00:13:37.411 { 00:13:37.411 "name": null, 00:13:37.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:37.411 "is_configured": false, 00:13:37.411 "data_offset": 0, 00:13:37.411 "data_size": 63488 00:13:37.411 }, 00:13:37.411 { 00:13:37.411 "name": "BaseBdev2", 00:13:37.411 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:37.411 "is_configured": true, 00:13:37.411 "data_offset": 2048, 00:13:37.411 "data_size": 63488 00:13:37.411 } 00:13:37.411 ] 00:13:37.411 }' 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:37.411 03:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.670 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:37.670 03:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:37.670 03:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.670 [2024-10-09 03:16:20.960056] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:37.670 [2024-10-09 03:16:20.960126] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:37.670 [2024-10-09 03:16:20.960148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:37.670 [2024-10-09 03:16:20.960160] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:37.670 [2024-10-09 03:16:20.960724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:37.670 [2024-10-09 03:16:20.960757] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:37.670 [2024-10-09 03:16:20.960877] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:13:37.670 [2024-10-09 03:16:20.960895] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:13:37.670 [2024-10-09 03:16:20.960906] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:13:37.670 [2024-10-09 03:16:20.960944] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:37.929 [2024-10-09 03:16:20.976688] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:13:37.929 spare 00:13:37.929 03:16:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:37.929 03:16:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:13:37.929 [2024-10-09 03:16:20.978824] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:38.869 03:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:38.869 03:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:38.869 03:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:38.869 03:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:38.869 03:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:38.869 03:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:38.869 03:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.869 03:16:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:38.869 03:16:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.869 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.869 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:38.869 "name": "raid_bdev1", 00:13:38.869 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:38.869 "strip_size_kb": 0, 00:13:38.869 "state": "online", 00:13:38.869 "raid_level": "raid1", 00:13:38.869 "superblock": true, 00:13:38.869 "num_base_bdevs": 2, 00:13:38.869 "num_base_bdevs_discovered": 2, 00:13:38.869 "num_base_bdevs_operational": 2, 00:13:38.869 "process": { 00:13:38.869 "type": "rebuild", 00:13:38.869 "target": "spare", 00:13:38.869 "progress": { 00:13:38.869 "blocks": 20480, 00:13:38.869 "percent": 32 00:13:38.869 } 00:13:38.869 }, 00:13:38.869 "base_bdevs_list": [ 00:13:38.869 { 00:13:38.869 "name": "spare", 00:13:38.869 "uuid": "16088194-0db2-5de1-80a2-3d337891913e", 00:13:38.869 "is_configured": true, 00:13:38.869 "data_offset": 2048, 00:13:38.869 "data_size": 63488 00:13:38.869 }, 00:13:38.869 { 00:13:38.869 "name": "BaseBdev2", 00:13:38.869 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:38.869 "is_configured": true, 00:13:38.869 "data_offset": 2048, 00:13:38.869 "data_size": 63488 00:13:38.869 } 00:13:38.869 ] 00:13:38.869 }' 00:13:38.869 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:38.869 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:38.869 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:38.869 
03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:38.869 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:13:38.869 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.869 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.869 [2024-10-09 03:16:22.143616] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.128 [2024-10-09 03:16:22.188292] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:39.128 [2024-10-09 03:16:22.188387] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:39.128 [2024-10-09 03:16:22.188406] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:39.128 [2024-10-09 03:16:22.188415] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.128 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.128 "name": "raid_bdev1", 00:13:39.128 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:39.128 "strip_size_kb": 0, 00:13:39.128 "state": "online", 00:13:39.128 "raid_level": "raid1", 00:13:39.128 "superblock": true, 00:13:39.128 "num_base_bdevs": 2, 00:13:39.128 "num_base_bdevs_discovered": 1, 00:13:39.128 "num_base_bdevs_operational": 1, 00:13:39.129 "base_bdevs_list": [ 00:13:39.129 { 00:13:39.129 "name": null, 00:13:39.129 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.129 "is_configured": false, 00:13:39.129 "data_offset": 0, 00:13:39.129 "data_size": 63488 00:13:39.129 }, 00:13:39.129 { 00:13:39.129 "name": "BaseBdev2", 00:13:39.129 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:39.129 "is_configured": true, 00:13:39.129 "data_offset": 2048, 00:13:39.129 "data_size": 63488 00:13:39.129 } 00:13:39.129 ] 00:13:39.129 }' 00:13:39.129 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.129 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.388 03:16:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:39.388 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:39.388 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:39.388 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:39.388 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:39.388 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:39.388 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.388 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.388 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.388 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.388 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:39.388 "name": "raid_bdev1", 00:13:39.388 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:39.388 "strip_size_kb": 0, 00:13:39.388 "state": "online", 00:13:39.388 "raid_level": "raid1", 00:13:39.388 "superblock": true, 00:13:39.388 "num_base_bdevs": 2, 00:13:39.388 "num_base_bdevs_discovered": 1, 00:13:39.388 "num_base_bdevs_operational": 1, 00:13:39.388 "base_bdevs_list": [ 00:13:39.388 { 00:13:39.388 "name": null, 00:13:39.388 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:39.388 "is_configured": false, 00:13:39.388 "data_offset": 0, 00:13:39.388 "data_size": 63488 00:13:39.388 }, 00:13:39.388 { 00:13:39.388 "name": "BaseBdev2", 00:13:39.388 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:39.388 "is_configured": true, 00:13:39.388 "data_offset": 2048, 00:13:39.388 "data_size": 
63488 00:13:39.388 } 00:13:39.388 ] 00:13:39.388 }' 00:13:39.388 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:39.648 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:39.648 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:39.648 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:39.648 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:13:39.648 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.648 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.648 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.648 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:39.648 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.648 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.648 [2024-10-09 03:16:22.803050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:39.648 [2024-10-09 03:16:22.803135] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:39.648 [2024-10-09 03:16:22.803162] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:39.648 [2024-10-09 03:16:22.803172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:39.648 [2024-10-09 03:16:22.803710] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:39.648 [2024-10-09 03:16:22.803736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:13:39.648 [2024-10-09 03:16:22.803832] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:13:39.648 [2024-10-09 03:16:22.803866] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:39.648 [2024-10-09 03:16:22.803886] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:39.648 [2024-10-09 03:16:22.803898] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:13:39.648 BaseBdev1 00:13:39.648 03:16:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.648 03:16:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.586 "name": "raid_bdev1", 00:13:40.586 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:40.586 "strip_size_kb": 0, 00:13:40.586 "state": "online", 00:13:40.586 "raid_level": "raid1", 00:13:40.586 "superblock": true, 00:13:40.586 "num_base_bdevs": 2, 00:13:40.586 "num_base_bdevs_discovered": 1, 00:13:40.586 "num_base_bdevs_operational": 1, 00:13:40.586 "base_bdevs_list": [ 00:13:40.586 { 00:13:40.586 "name": null, 00:13:40.586 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:40.586 "is_configured": false, 00:13:40.586 "data_offset": 0, 00:13:40.586 "data_size": 63488 00:13:40.586 }, 00:13:40.586 { 00:13:40.586 "name": "BaseBdev2", 00:13:40.586 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:40.586 "is_configured": true, 00:13:40.586 "data_offset": 2048, 00:13:40.586 "data_size": 63488 00:13:40.586 } 00:13:40.586 ] 00:13:40.586 }' 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.586 03:16:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:41.155 "name": "raid_bdev1", 00:13:41.155 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:41.155 "strip_size_kb": 0, 00:13:41.155 "state": "online", 00:13:41.155 "raid_level": "raid1", 00:13:41.155 "superblock": true, 00:13:41.155 "num_base_bdevs": 2, 00:13:41.155 "num_base_bdevs_discovered": 1, 00:13:41.155 "num_base_bdevs_operational": 1, 00:13:41.155 "base_bdevs_list": [ 00:13:41.155 { 00:13:41.155 "name": null, 00:13:41.155 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:41.155 "is_configured": false, 00:13:41.155 "data_offset": 0, 00:13:41.155 "data_size": 63488 00:13:41.155 }, 00:13:41.155 { 00:13:41.155 "name": "BaseBdev2", 00:13:41.155 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:41.155 "is_configured": true, 00:13:41.155 "data_offset": 2048, 00:13:41.155 "data_size": 63488 00:13:41.155 } 00:13:41.155 ] 00:13:41.155 }' 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:41.155 03:16:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.155 [2024-10-09 03:16:24.372746] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:41.155 [2024-10-09 03:16:24.372990] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:13:41.155 [2024-10-09 03:16:24.373009] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:13:41.155 request: 00:13:41.155 { 00:13:41.155 "base_bdev": "BaseBdev1", 00:13:41.155 "raid_bdev": "raid_bdev1", 00:13:41.155 "method": 
"bdev_raid_add_base_bdev", 00:13:41.155 "req_id": 1 00:13:41.155 } 00:13:41.155 Got JSON-RPC error response 00:13:41.155 response: 00:13:41.155 { 00:13:41.155 "code": -22, 00:13:41.155 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:13:41.155 } 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:13:41.155 03:16:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:13:42.092 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:42.092 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:42.092 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.092 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.092 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.093 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:42.093 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.093 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.093 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.093 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.093 03:16:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.093 03:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.093 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.093 03:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.352 03:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.352 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.352 "name": "raid_bdev1", 00:13:42.352 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:42.352 "strip_size_kb": 0, 00:13:42.352 "state": "online", 00:13:42.352 "raid_level": "raid1", 00:13:42.352 "superblock": true, 00:13:42.352 "num_base_bdevs": 2, 00:13:42.352 "num_base_bdevs_discovered": 1, 00:13:42.352 "num_base_bdevs_operational": 1, 00:13:42.352 "base_bdevs_list": [ 00:13:42.352 { 00:13:42.352 "name": null, 00:13:42.352 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.352 "is_configured": false, 00:13:42.352 "data_offset": 0, 00:13:42.352 "data_size": 63488 00:13:42.352 }, 00:13:42.352 { 00:13:42.352 "name": "BaseBdev2", 00:13:42.352 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:42.352 "is_configured": true, 00:13:42.352 "data_offset": 2048, 00:13:42.352 "data_size": 63488 00:13:42.352 } 00:13:42.352 ] 00:13:42.352 }' 00:13:42.352 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.352 03:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.611 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:42.611 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:42.611 03:16:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:42.611 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:42.611 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:42.611 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.611 03:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.611 03:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.611 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:42.611 03:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.611 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:42.611 "name": "raid_bdev1", 00:13:42.611 "uuid": "9b5d1191-6a4c-4075-8c33-6be2fbd48322", 00:13:42.611 "strip_size_kb": 0, 00:13:42.611 "state": "online", 00:13:42.611 "raid_level": "raid1", 00:13:42.611 "superblock": true, 00:13:42.611 "num_base_bdevs": 2, 00:13:42.611 "num_base_bdevs_discovered": 1, 00:13:42.611 "num_base_bdevs_operational": 1, 00:13:42.611 "base_bdevs_list": [ 00:13:42.611 { 00:13:42.611 "name": null, 00:13:42.611 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:42.611 "is_configured": false, 00:13:42.611 "data_offset": 0, 00:13:42.611 "data_size": 63488 00:13:42.611 }, 00:13:42.611 { 00:13:42.611 "name": "BaseBdev2", 00:13:42.611 "uuid": "108b2015-5ab2-5275-8cb7-6ccc7efd44b2", 00:13:42.611 "is_configured": true, 00:13:42.611 "data_offset": 2048, 00:13:42.611 "data_size": 63488 00:13:42.611 } 00:13:42.611 ] 00:13:42.611 }' 00:13:42.611 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:42.869 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:13:42.869 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:42.869 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:42.869 03:16:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75973 00:13:42.869 03:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 75973 ']' 00:13:42.869 03:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 75973 00:13:42.869 03:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:13:42.869 03:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:42.869 03:16:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 75973 00:13:42.869 03:16:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:42.869 03:16:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:42.869 killing process with pid 75973 00:13:42.869 03:16:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 75973' 00:13:42.869 03:16:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 75973 00:13:42.869 Received shutdown signal, test time was about 60.000000 seconds 00:13:42.869 00:13:42.869 Latency(us) 00:13:42.869 [2024-10-09T03:16:26.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:42.869 [2024-10-09T03:16:26.172Z] =================================================================================================================== 00:13:42.869 [2024-10-09T03:16:26.172Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:13:42.869 [2024-10-09 03:16:26.006571] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:42.869 03:16:26 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 75973 00:13:42.869 [2024-10-09 03:16:26.006757] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:42.869 [2024-10-09 03:16:26.006823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:42.869 [2024-10-09 03:16:26.006836] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:13:43.127 [2024-10-09 03:16:26.329638] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:13:44.513 00:13:44.513 real 0m23.467s 00:13:44.513 user 0m28.088s 00:13:44.513 sys 0m3.852s 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.513 ************************************ 00:13:44.513 END TEST raid_rebuild_test_sb 00:13:44.513 ************************************ 00:13:44.513 03:16:27 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:13:44.513 03:16:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:44.513 03:16:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:44.513 03:16:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.513 ************************************ 00:13:44.513 START TEST raid_rebuild_test_io 00:13:44.513 ************************************ 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 false true true 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:44.513 
03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76707 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76707 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 76707 ']' 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:44.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:44.513 03:16:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:44.773 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:44.773 Zero copy mechanism will not be used. 00:13:44.773 [2024-10-09 03:16:27.860370] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:13:44.773 [2024-10-09 03:16:27.860484] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76707 ] 00:13:44.773 [2024-10-09 03:16:28.024922] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.032 [2024-10-09 03:16:28.281907] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.291 [2024-10-09 03:16:28.527446] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.291 [2024-10-09 03:16:28.527492] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.550 BaseBdev1_malloc 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.550 [2024-10-09 03:16:28.748271] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:13:45.550 [2024-10-09 03:16:28.748433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.550 [2024-10-09 03:16:28.748460] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:45.550 [2024-10-09 03:16:28.748476] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.550 [2024-10-09 03:16:28.750852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.550 [2024-10-09 03:16:28.750888] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:45.550 BaseBdev1 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.550 BaseBdev2_malloc 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.550 [2024-10-09 03:16:28.817472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:45.550 [2024-10-09 03:16:28.817613] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.550 [2024-10-09 03:16:28.817648] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:45.550 [2024-10-09 03:16:28.817684] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.550 [2024-10-09 03:16:28.819959] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.550 [2024-10-09 03:16:28.820030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:45.550 BaseBdev2 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:45.550 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.551 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.810 spare_malloc 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.810 spare_delay 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.810 [2024-10-09 03:16:28.889539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 
00:13:45.810 [2024-10-09 03:16:28.889665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.810 [2024-10-09 03:16:28.889711] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:45.810 [2024-10-09 03:16:28.889745] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.810 [2024-10-09 03:16:28.892112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.810 [2024-10-09 03:16:28.892188] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:45.810 spare 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.810 [2024-10-09 03:16:28.901565] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:45.810 [2024-10-09 03:16:28.903635] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:45.810 [2024-10-09 03:16:28.903762] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:45.810 [2024-10-09 03:16:28.903804] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:45.810 [2024-10-09 03:16:28.904094] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:45.810 [2024-10-09 03:16:28.904279] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:45.810 [2024-10-09 03:16:28.904319] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007780 00:13:45.810 [2024-10-09 03:16:28.904502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:45.810 
"name": "raid_bdev1", 00:13:45.810 "uuid": "cce08514-2493-4424-820e-b418b043fa05", 00:13:45.810 "strip_size_kb": 0, 00:13:45.810 "state": "online", 00:13:45.810 "raid_level": "raid1", 00:13:45.810 "superblock": false, 00:13:45.810 "num_base_bdevs": 2, 00:13:45.810 "num_base_bdevs_discovered": 2, 00:13:45.810 "num_base_bdevs_operational": 2, 00:13:45.810 "base_bdevs_list": [ 00:13:45.810 { 00:13:45.810 "name": "BaseBdev1", 00:13:45.810 "uuid": "cffb36aa-9e3a-5b4e-8a40-46d54a4915fe", 00:13:45.810 "is_configured": true, 00:13:45.810 "data_offset": 0, 00:13:45.810 "data_size": 65536 00:13:45.810 }, 00:13:45.810 { 00:13:45.810 "name": "BaseBdev2", 00:13:45.810 "uuid": "a7d8cb7e-5df4-5f36-8cff-f7b6a2a4fc03", 00:13:45.810 "is_configured": true, 00:13:45.810 "data_offset": 0, 00:13:45.810 "data_size": 65536 00:13:45.810 } 00:13:45.810 ] 00:13:45.810 }' 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:45.810 03:16:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.069 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:46.069 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.069 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.069 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:46.069 [2024-10-09 03:16:29.361236] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.069 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.328 [2024-10-09 03:16:29.456770] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:46.328 03:16:29 
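The `verify_raid_bdev_state` calls in this trace compare fields of the `bdev_raid_get_bdevs all` JSON against expected values. A minimal sketch of that check follows, with the RPC output inlined as a variable instead of fetched via `rpc_cmd` (the bdev name and field values are copied from this log; `jq` is assumed to be installed):

```shell
#!/usr/bin/env bash
# Sketch of verify_raid_bdev_state: select one raid bdev out of the
# `bdev_raid_get_bdevs all` array and compare its fields.
set -euo pipefail

# Inlined stand-in for: rpc_cmd bdev_raid_get_bdevs all
raid_bdevs='[{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "strip_size_kb": 0,
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 2,
  "num_base_bdevs_operational": 2
}]'

# Same filter the test uses: .[] | select(.name == "raid_bdev1")
info=$(jq -r '.[] | select(.name == "raid_bdev1")' <<<"$raid_bdevs")

state=$(jq -r '.state' <<<"$info")
level=$(jq -r '.raid_level' <<<"$info")
operational=$(jq -r '.num_base_bdevs_operational' <<<"$info")

[[ $state == online ]] || { echo "bad state: $state"; exit 1; }
[[ $level == raid1 ]]  || { echo "bad level: $level"; exit 1; }
[[ $operational == 2 ]] || { echo "bad operational count"; exit 1; }
echo "raid_bdev1 verified: $state/$level, $operational operational"
```

The real helper in `bdev_raid.sh` does the same shape of comparison across more fields; this sketch only mirrors the jq selection visible in the trace.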
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.328 "name": "raid_bdev1", 00:13:46.328 "uuid": "cce08514-2493-4424-820e-b418b043fa05", 00:13:46.328 "strip_size_kb": 0, 00:13:46.328 "state": "online", 00:13:46.328 "raid_level": "raid1", 00:13:46.328 "superblock": false, 00:13:46.328 "num_base_bdevs": 2, 00:13:46.328 "num_base_bdevs_discovered": 1, 00:13:46.328 "num_base_bdevs_operational": 1, 00:13:46.328 "base_bdevs_list": [ 00:13:46.328 { 00:13:46.328 "name": null, 00:13:46.328 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:46.328 "is_configured": false, 00:13:46.328 "data_offset": 0, 00:13:46.328 "data_size": 65536 00:13:46.328 }, 00:13:46.328 { 00:13:46.328 "name": "BaseBdev2", 00:13:46.328 "uuid": "a7d8cb7e-5df4-5f36-8cff-f7b6a2a4fc03", 00:13:46.328 "is_configured": true, 00:13:46.328 "data_offset": 0, 00:13:46.328 "data_size": 65536 00:13:46.328 } 00:13:46.328 ] 00:13:46.328 }' 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:13:46.328 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.328 [2024-10-09 03:16:29.554297] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:46.328 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:46.328 Zero copy mechanism will not be used. 00:13:46.328 Running I/O for 60 seconds... 00:13:46.896 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:46.896 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:46.896 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:46.896 [2024-10-09 03:16:29.900545] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:46.896 03:16:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:46.896 03:16:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:13:46.896 [2024-10-09 03:16:29.954798] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:46.896 [2024-10-09 03:16:29.957009] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:46.896 [2024-10-09 03:16:30.076320] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:46.896 [2024-10-09 03:16:30.077134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:47.155 [2024-10-09 03:16:30.278947] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:47.155 [2024-10-09 03:16:30.279293] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:47.414 172.00 IOPS, 516.00 MiB/s 
[2024-10-09T03:16:30.717Z] [2024-10-09 03:16:30.611714] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:47.673 [2024-10-09 03:16:30.756623] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:47.673 [2024-10-09 03:16:30.757208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:47.673 03:16:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:47.673 03:16:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:47.673 03:16:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:47.673 03:16:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:47.673 03:16:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:47.673 03:16:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.673 03:16:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.673 03:16:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.673 03:16:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.673 03:16:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:47.932 03:16:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:47.932 "name": "raid_bdev1", 00:13:47.932 "uuid": "cce08514-2493-4424-820e-b418b043fa05", 00:13:47.932 "strip_size_kb": 0, 00:13:47.932 "state": "online", 00:13:47.932 "raid_level": "raid1", 00:13:47.932 "superblock": false, 00:13:47.932 "num_base_bdevs": 2, 00:13:47.932 
"num_base_bdevs_discovered": 2, 00:13:47.932 "num_base_bdevs_operational": 2, 00:13:47.932 "process": { 00:13:47.932 "type": "rebuild", 00:13:47.932 "target": "spare", 00:13:47.932 "progress": { 00:13:47.932 "blocks": 10240, 00:13:47.932 "percent": 15 00:13:47.932 } 00:13:47.932 }, 00:13:47.932 "base_bdevs_list": [ 00:13:47.932 { 00:13:47.932 "name": "spare", 00:13:47.932 "uuid": "94674048-c7f3-5121-8675-586d7f0e8614", 00:13:47.932 "is_configured": true, 00:13:47.932 "data_offset": 0, 00:13:47.932 "data_size": 65536 00:13:47.932 }, 00:13:47.932 { 00:13:47.932 "name": "BaseBdev2", 00:13:47.932 "uuid": "a7d8cb7e-5df4-5f36-8cff-f7b6a2a4fc03", 00:13:47.932 "is_configured": true, 00:13:47.932 "data_offset": 0, 00:13:47.932 "data_size": 65536 00:13:47.932 } 00:13:47.932 ] 00:13:47.932 }' 00:13:47.932 03:16:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:47.932 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:47.932 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:47.932 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:47.932 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:13:47.932 03:16:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:47.932 03:16:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:47.932 [2024-10-09 03:16:31.096670] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.932 [2024-10-09 03:16:31.202048] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:13:47.932 [2024-10-09 03:16:31.211063] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:47.932 [2024-10-09 03:16:31.211169] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:13:47.932 [2024-10-09 03:16:31.211199] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:13:48.191 [2024-10-09 03:16:31.253503] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set 
+x 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.191 "name": "raid_bdev1", 00:13:48.191 "uuid": "cce08514-2493-4424-820e-b418b043fa05", 00:13:48.191 "strip_size_kb": 0, 00:13:48.191 "state": "online", 00:13:48.191 "raid_level": "raid1", 00:13:48.191 "superblock": false, 00:13:48.191 "num_base_bdevs": 2, 00:13:48.191 "num_base_bdevs_discovered": 1, 00:13:48.191 "num_base_bdevs_operational": 1, 00:13:48.191 "base_bdevs_list": [ 00:13:48.191 { 00:13:48.191 "name": null, 00:13:48.191 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.191 "is_configured": false, 00:13:48.191 "data_offset": 0, 00:13:48.191 "data_size": 65536 00:13:48.191 }, 00:13:48.191 { 00:13:48.191 "name": "BaseBdev2", 00:13:48.191 "uuid": "a7d8cb7e-5df4-5f36-8cff-f7b6a2a4fc03", 00:13:48.191 "is_configured": true, 00:13:48.191 "data_offset": 0, 00:13:48.191 "data_size": 65536 00:13:48.191 } 00:13:48.191 ] 00:13:48.191 }' 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.191 03:16:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.450 153.00 IOPS, 459.00 MiB/s [2024-10-09T03:16:31.753Z] 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:48.450 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:48.450 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:48.450 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:48.450 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:48.450 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 
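The JSON above shows what a degraded raid1 looks like after `bdev_raid_remove_base_bdev`: the removed slot stays in `base_bdevs_list` as a placeholder with `"name": null`, the all-zero UUID, and `"is_configured": false`, while the discovered/operational counts drop to 1. A sketch of detecting that condition from the list (values inlined from this log; `jq` assumed available):

```shell
#!/usr/bin/env bash
# Detect a degraded array by counting configured slots in
# base_bdevs_list, as reported after BaseBdev1 was removed.
set -euo pipefail

base_list='[
  {"name": null, "uuid": "00000000-0000-0000-0000-000000000000",
   "is_configured": false, "data_offset": 0, "data_size": 65536},
  {"name": "BaseBdev2", "uuid": "a7d8cb7e-5df4-5f36-8cff-f7b6a2a4fc03",
   "is_configured": true, "data_offset": 0, "data_size": 65536}
]'

# Count only slots still backed by a configured base bdev.
configured=$(jq '[.[] | select(.is_configured)] | length' <<<"$base_list")
total=$(jq 'length' <<<"$base_list")

echo "configured $configured of $total base bdevs"
if (( configured < total )); then
    echo "array is degraded"
fi
```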
00:13:48.450 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.450 03:16:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.450 03:16:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.709 03:16:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.709 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:48.709 "name": "raid_bdev1", 00:13:48.709 "uuid": "cce08514-2493-4424-820e-b418b043fa05", 00:13:48.709 "strip_size_kb": 0, 00:13:48.709 "state": "online", 00:13:48.709 "raid_level": "raid1", 00:13:48.709 "superblock": false, 00:13:48.709 "num_base_bdevs": 2, 00:13:48.709 "num_base_bdevs_discovered": 1, 00:13:48.709 "num_base_bdevs_operational": 1, 00:13:48.709 "base_bdevs_list": [ 00:13:48.709 { 00:13:48.709 "name": null, 00:13:48.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:48.709 "is_configured": false, 00:13:48.709 "data_offset": 0, 00:13:48.709 "data_size": 65536 00:13:48.709 }, 00:13:48.709 { 00:13:48.709 "name": "BaseBdev2", 00:13:48.709 "uuid": "a7d8cb7e-5df4-5f36-8cff-f7b6a2a4fc03", 00:13:48.709 "is_configured": true, 00:13:48.709 "data_offset": 0, 00:13:48.709 "data_size": 65536 00:13:48.709 } 00:13:48.709 ] 00:13:48.709 }' 00:13:48.709 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:48.709 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:48.709 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:48.709 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:48.709 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:13:48.709 03:16:31 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.709 03:16:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:48.709 [2024-10-09 03:16:31.869420] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:13:48.709 03:16:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.709 03:16:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:13:48.709 [2024-10-09 03:16:31.923462] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:13:48.709 [2024-10-09 03:16:31.925793] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:13:48.968 [2024-10-09 03:16:32.046862] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:48.968 [2024-10-09 03:16:32.047569] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:13:48.968 [2024-10-09 03:16:32.268908] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:48.968 [2024-10-09 03:16:32.269313] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:13:49.537 147.33 IOPS, 442.00 MiB/s [2024-10-09T03:16:32.840Z] [2024-10-09 03:16:32.583766] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:49.537 [2024-10-09 03:16:32.584645] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:13:49.537 [2024-10-09 03:16:32.799863] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:49.537 [2024-10-09 03:16:32.800391] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:13:49.796 03:16:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.796 03:16:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.796 03:16:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.796 03:16:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.796 03:16:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.796 03:16:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.796 03:16:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.796 03:16:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.796 03:16:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.796 03:16:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.796 03:16:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.796 "name": "raid_bdev1", 00:13:49.796 "uuid": "cce08514-2493-4424-820e-b418b043fa05", 00:13:49.796 "strip_size_kb": 0, 00:13:49.796 "state": "online", 00:13:49.796 "raid_level": "raid1", 00:13:49.796 "superblock": false, 00:13:49.796 "num_base_bdevs": 2, 00:13:49.796 "num_base_bdevs_discovered": 2, 00:13:49.796 "num_base_bdevs_operational": 2, 00:13:49.796 "process": { 00:13:49.796 "type": "rebuild", 00:13:49.796 "target": "spare", 00:13:49.796 "progress": { 00:13:49.796 "blocks": 10240, 00:13:49.796 "percent": 15 00:13:49.796 } 00:13:49.796 }, 00:13:49.796 "base_bdevs_list": [ 00:13:49.796 { 00:13:49.796 "name": "spare", 00:13:49.796 "uuid": 
"94674048-c7f3-5121-8675-586d7f0e8614", 00:13:49.796 "is_configured": true, 00:13:49.796 "data_offset": 0, 00:13:49.796 "data_size": 65536 00:13:49.796 }, 00:13:49.796 { 00:13:49.796 "name": "BaseBdev2", 00:13:49.796 "uuid": "a7d8cb7e-5df4-5f36-8cff-f7b6a2a4fc03", 00:13:49.796 "is_configured": true, 00:13:49.796 "data_offset": 0, 00:13:49.796 "data_size": 65536 00:13:49.796 } 00:13:49.796 ] 00:13:49.796 }' 00:13:49.796 03:16:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=420 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:49.796 03:16:33 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:49.796 "name": "raid_bdev1", 00:13:49.796 "uuid": "cce08514-2493-4424-820e-b418b043fa05", 00:13:49.796 "strip_size_kb": 0, 00:13:49.796 "state": "online", 00:13:49.796 "raid_level": "raid1", 00:13:49.796 "superblock": false, 00:13:49.796 "num_base_bdevs": 2, 00:13:49.796 "num_base_bdevs_discovered": 2, 00:13:49.796 "num_base_bdevs_operational": 2, 00:13:49.796 "process": { 00:13:49.796 "type": "rebuild", 00:13:49.796 "target": "spare", 00:13:49.796 "progress": { 00:13:49.796 "blocks": 12288, 00:13:49.796 "percent": 18 00:13:49.796 } 00:13:49.796 }, 00:13:49.796 "base_bdevs_list": [ 00:13:49.796 { 00:13:49.796 "name": "spare", 00:13:49.796 "uuid": "94674048-c7f3-5121-8675-586d7f0e8614", 00:13:49.796 "is_configured": true, 00:13:49.796 "data_offset": 0, 00:13:49.796 "data_size": 65536 00:13:49.796 }, 00:13:49.796 { 00:13:49.796 "name": "BaseBdev2", 00:13:49.796 "uuid": "a7d8cb7e-5df4-5f36-8cff-f7b6a2a4fc03", 00:13:49.796 "is_configured": true, 00:13:49.796 "data_offset": 0, 00:13:49.796 "data_size": 65536 00:13:49.796 } 00:13:49.796 ] 00:13:49.796 }' 00:13:49.796 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:50.056 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:50.056 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:50.056 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:50.056 03:16:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:50.056 [2024-10-09 03:16:33.180592] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:13:50.316 [2024-10-09 03:16:33.388829] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:50.316 [2024-10-09 03:16:33.389243] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:13:50.577 122.00 IOPS, 366.00 MiB/s [2024-10-09T03:16:33.880Z] [2024-10-09 03:16:33.624119] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:13:50.577 [2024-10-09 03:16:33.834334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:13:50.836 [2024-10-09 03:16:34.060759] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:51.095 03:16:34 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:51.095 "name": "raid_bdev1", 00:13:51.095 "uuid": "cce08514-2493-4424-820e-b418b043fa05", 00:13:51.095 "strip_size_kb": 0, 00:13:51.095 "state": "online", 00:13:51.095 "raid_level": "raid1", 00:13:51.095 "superblock": false, 00:13:51.095 "num_base_bdevs": 2, 00:13:51.095 "num_base_bdevs_discovered": 2, 00:13:51.095 "num_base_bdevs_operational": 2, 00:13:51.095 "process": { 00:13:51.095 "type": "rebuild", 00:13:51.095 "target": "spare", 00:13:51.095 "progress": { 00:13:51.095 "blocks": 28672, 00:13:51.095 "percent": 43 00:13:51.095 } 00:13:51.095 }, 00:13:51.095 "base_bdevs_list": [ 00:13:51.095 { 00:13:51.095 "name": "spare", 00:13:51.095 "uuid": "94674048-c7f3-5121-8675-586d7f0e8614", 00:13:51.095 "is_configured": true, 00:13:51.095 "data_offset": 0, 00:13:51.095 "data_size": 65536 00:13:51.095 }, 00:13:51.095 { 00:13:51.095 "name": "BaseBdev2", 00:13:51.095 "uuid": "a7d8cb7e-5df4-5f36-8cff-f7b6a2a4fc03", 00:13:51.095 "is_configured": true, 00:13:51.095 "data_offset": 0, 00:13:51.095 "data_size": 65536 00:13:51.095 } 00:13:51.095 ] 00:13:51.095 }' 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:51.095 03:16:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:51.095 [2024-10-09 03:16:34.365949] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:51.095 [2024-10-09 03:16:34.366754] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:13:51.354 106.60 IOPS, 319.80 MiB/s [2024-10-09T03:16:34.658Z] [2024-10-09 03:16:34.582115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:13:51.924 [2024-10-09 03:16:34.945060] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:52.184 03:16:35 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:52.184 "name": "raid_bdev1", 00:13:52.184 "uuid": "cce08514-2493-4424-820e-b418b043fa05", 00:13:52.184 "strip_size_kb": 0, 00:13:52.184 "state": "online", 00:13:52.184 "raid_level": "raid1", 00:13:52.184 "superblock": false, 00:13:52.184 "num_base_bdevs": 2, 00:13:52.184 "num_base_bdevs_discovered": 2, 00:13:52.184 "num_base_bdevs_operational": 2, 00:13:52.184 "process": { 00:13:52.184 "type": "rebuild", 00:13:52.184 "target": "spare", 00:13:52.184 "progress": { 00:13:52.184 "blocks": 43008, 00:13:52.184 "percent": 65 00:13:52.184 } 00:13:52.184 }, 00:13:52.184 "base_bdevs_list": [ 00:13:52.184 { 00:13:52.184 "name": "spare", 00:13:52.184 "uuid": "94674048-c7f3-5121-8675-586d7f0e8614", 00:13:52.184 "is_configured": true, 00:13:52.184 "data_offset": 0, 00:13:52.184 "data_size": 65536 00:13:52.184 }, 00:13:52.184 { 00:13:52.184 "name": "BaseBdev2", 00:13:52.184 "uuid": "a7d8cb7e-5df4-5f36-8cff-f7b6a2a4fc03", 00:13:52.184 "is_configured": true, 00:13:52.184 "data_offset": 0, 00:13:52.184 "data_size": 65536 00:13:52.184 } 00:13:52.184 ] 00:13:52.184 }' 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:52.184 [2024-10-09 03:16:35.378367] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:52.184 [2024-10-09 03:16:35.379108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 
-- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:52.184 03:16:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:53.011 96.50 IOPS, 289.50 MiB/s [2024-10-09T03:16:36.314Z] [2024-10-09 03:16:36.101430] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:53.011 [2024-10-09 03:16:36.102116] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 57344 offset_begin: 55296 offset_end: 61440 00:13:53.271 03:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:53.271 03:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:53.271 03:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:53.271 03:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:53.271 03:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:53.271 03:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:53.271 03:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:53.271 03:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:53.271 03:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.271 03:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:53.271 03:16:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.271 03:16:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:53.271 "name": "raid_bdev1", 00:13:53.271 "uuid": "cce08514-2493-4424-820e-b418b043fa05", 00:13:53.271 "strip_size_kb": 0, 00:13:53.271 "state": "online", 00:13:53.271 "raid_level": "raid1", 00:13:53.271 "superblock": false, 00:13:53.271 "num_base_bdevs": 2, 00:13:53.271 "num_base_bdevs_discovered": 2, 00:13:53.271 "num_base_bdevs_operational": 2, 00:13:53.271 "process": { 00:13:53.271 "type": "rebuild", 00:13:53.271 "target": "spare", 00:13:53.271 "progress": { 00:13:53.271 "blocks": 63488, 00:13:53.271 "percent": 96 00:13:53.271 } 00:13:53.271 }, 00:13:53.271 "base_bdevs_list": [ 00:13:53.271 { 00:13:53.271 "name": "spare", 00:13:53.271 "uuid": "94674048-c7f3-5121-8675-586d7f0e8614", 00:13:53.271 "is_configured": true, 00:13:53.271 "data_offset": 0, 00:13:53.271 "data_size": 65536 00:13:53.271 }, 00:13:53.271 { 00:13:53.271 "name": "BaseBdev2", 00:13:53.271 "uuid": "a7d8cb7e-5df4-5f36-8cff-f7b6a2a4fc03", 00:13:53.271 "is_configured": true, 00:13:53.271 "data_offset": 0, 00:13:53.271 "data_size": 65536 00:13:53.271 } 00:13:53.271 ] 00:13:53.271 }' 00:13:53.271 03:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:53.271 [2024-10-09 03:16:36.509420] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:13:53.271 03:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:13:53.271 03:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:53.271 89.14 IOPS, 267.43 MiB/s [2024-10-09T03:16:36.574Z] [2024-10-09 03:16:36.553256] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:13:53.271 [2024-10-09 03:16:36.557496] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:53.531 03:16:36 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:13:53.531 03:16:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:13:54.470 81.75 IOPS, 245.25 MiB/s [2024-10-09T03:16:37.773Z] 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.470 "name": "raid_bdev1", 00:13:54.470 "uuid": "cce08514-2493-4424-820e-b418b043fa05", 00:13:54.470 "strip_size_kb": 0, 00:13:54.470 "state": "online", 00:13:54.470 "raid_level": "raid1", 00:13:54.470 "superblock": false, 00:13:54.470 "num_base_bdevs": 2, 00:13:54.470 "num_base_bdevs_discovered": 2, 00:13:54.470 "num_base_bdevs_operational": 2, 00:13:54.470 "base_bdevs_list": [ 00:13:54.470 { 00:13:54.470 "name": "spare", 00:13:54.470 "uuid": 
"94674048-c7f3-5121-8675-586d7f0e8614", 00:13:54.470 "is_configured": true, 00:13:54.470 "data_offset": 0, 00:13:54.470 "data_size": 65536 00:13:54.470 }, 00:13:54.470 { 00:13:54.470 "name": "BaseBdev2", 00:13:54.470 "uuid": "a7d8cb7e-5df4-5f36-8cff-f7b6a2a4fc03", 00:13:54.470 "is_configured": true, 00:13:54.470 "data_offset": 0, 00:13:54.470 "data_size": 65536 00:13:54.470 } 00:13:54.470 ] 00:13:54.470 }' 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.470 03:16:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:13:54.470 "name": "raid_bdev1", 00:13:54.470 "uuid": "cce08514-2493-4424-820e-b418b043fa05", 00:13:54.470 "strip_size_kb": 0, 00:13:54.470 "state": "online", 00:13:54.470 "raid_level": "raid1", 00:13:54.470 "superblock": false, 00:13:54.470 "num_base_bdevs": 2, 00:13:54.470 "num_base_bdevs_discovered": 2, 00:13:54.470 "num_base_bdevs_operational": 2, 00:13:54.470 "base_bdevs_list": [ 00:13:54.470 { 00:13:54.470 "name": "spare", 00:13:54.470 "uuid": "94674048-c7f3-5121-8675-586d7f0e8614", 00:13:54.470 "is_configured": true, 00:13:54.470 "data_offset": 0, 00:13:54.470 "data_size": 65536 00:13:54.470 }, 00:13:54.470 { 00:13:54.470 "name": "BaseBdev2", 00:13:54.470 "uuid": "a7d8cb7e-5df4-5f36-8cff-f7b6a2a4fc03", 00:13:54.470 "is_configured": true, 00:13:54.470 "data_offset": 0, 00:13:54.470 "data_size": 65536 00:13:54.470 } 00:13:54.470 ] 00:13:54.470 }' 00:13:54.470 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:54.729 03:16:37 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:54.729 "name": "raid_bdev1", 00:13:54.729 "uuid": "cce08514-2493-4424-820e-b418b043fa05", 00:13:54.729 "strip_size_kb": 0, 00:13:54.729 "state": "online", 00:13:54.729 "raid_level": "raid1", 00:13:54.729 "superblock": false, 00:13:54.729 "num_base_bdevs": 2, 00:13:54.729 "num_base_bdevs_discovered": 2, 00:13:54.729 "num_base_bdevs_operational": 2, 00:13:54.729 "base_bdevs_list": [ 00:13:54.729 { 00:13:54.729 "name": "spare", 00:13:54.729 "uuid": "94674048-c7f3-5121-8675-586d7f0e8614", 00:13:54.729 "is_configured": true, 00:13:54.729 "data_offset": 0, 00:13:54.729 "data_size": 65536 00:13:54.729 }, 00:13:54.729 { 00:13:54.729 "name": "BaseBdev2", 00:13:54.729 "uuid": "a7d8cb7e-5df4-5f36-8cff-f7b6a2a4fc03", 00:13:54.729 
"is_configured": true, 00:13:54.729 "data_offset": 0, 00:13:54.729 "data_size": 65536 00:13:54.729 } 00:13:54.729 ] 00:13:54.729 }' 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:54.729 03:16:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.988 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:54.988 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:54.988 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:54.988 [2024-10-09 03:16:38.246167] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:54.988 [2024-10-09 03:16:38.246310] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:54.988 00:13:54.988 Latency(us) 00:13:54.988 [2024-10-09T03:16:38.291Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:54.988 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:13:54.988 raid_bdev1 : 8.74 77.70 233.11 0.00 0.00 17424.64 338.05 110352.32 00:13:54.988 [2024-10-09T03:16:38.291Z] =================================================================================================================== 00:13:54.988 [2024-10-09T03:16:38.291Z] Total : 77.70 233.11 0.00 0.00 17424.64 338.05 110352.32 00:13:55.249 { 00:13:55.249 "results": [ 00:13:55.249 { 00:13:55.249 "job": "raid_bdev1", 00:13:55.249 "core_mask": "0x1", 00:13:55.249 "workload": "randrw", 00:13:55.249 "percentage": 50, 00:13:55.249 "status": "finished", 00:13:55.249 "queue_depth": 2, 00:13:55.249 "io_size": 3145728, 00:13:55.249 "runtime": 8.738209, 00:13:55.249 "iops": 77.70471042750293, 00:13:55.249 "mibps": 233.1141312825088, 00:13:55.249 "io_failed": 0, 00:13:55.249 "io_timeout": 0, 00:13:55.249 "avg_latency_us": 
17424.6444205774, 00:13:55.249 "min_latency_us": 338.05414847161575, 00:13:55.249 "max_latency_us": 110352.32139737991 00:13:55.249 } 00:13:55.249 ], 00:13:55.249 "core_count": 1 00:13:55.249 } 00:13:55.249 [2024-10-09 03:16:38.298886] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.249 [2024-10-09 03:16:38.298932] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:55.249 [2024-10-09 03:16:38.299009] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:55.249 [2024-10-09 03:16:38.299024] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # 
bdev_list=('spare') 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:55.249 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:13:55.509 /dev/nbd0 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.509 1+0 records in 00:13:55.509 1+0 records out 00:13:55.509 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000325279 s, 12.6 MB/s 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:13:55.509 
03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:55.509 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:13:55.769 /dev/nbd1 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:13:55.769 1+0 records in 00:13:55.769 1+0 records out 00:13:55.769 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018223 s, 22.5 MB/s 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 
00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:13:55.769 03:16:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:13:55.769 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:13:55.769 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:55.769 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:13:55.769 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:55.769 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:55.769 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:55.769 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:13:56.029 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:13:56.029 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:13:56.029 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:13:56.029 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.029 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.029 
03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:13:56.029 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:56.029 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.029 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:13:56.029 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:13:56.029 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:13:56.029 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:13:56.029 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:13:56.029 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:13:56.029 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = 
true ']' 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76707 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 76707 ']' 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 76707 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 76707 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 76707' 00:13:56.289 killing process with pid 76707 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 76707 00:13:56.289 Received shutdown signal, test time was about 9.978852 seconds 00:13:56.289 00:13:56.289 Latency(us) 00:13:56.289 [2024-10-09T03:16:39.592Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:56.289 [2024-10-09T03:16:39.592Z] =================================================================================================================== 00:13:56.289 [2024-10-09T03:16:39.592Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:13:56.289 [2024-10-09 03:16:39.516382] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:56.289 03:16:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 76707 00:13:56.549 [2024-10-09 03:16:39.758180] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:57.934 ************************************ 00:13:57.934 
END TEST raid_rebuild_test_io 00:13:57.934 ************************************ 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:13:57.934 00:13:57.934 real 0m13.397s 00:13:57.934 user 0m16.384s 00:13:57.934 sys 0m1.627s 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:13:57.934 03:16:41 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:13:57.934 03:16:41 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:13:57.934 03:16:41 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:57.934 03:16:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:57.934 ************************************ 00:13:57.934 START TEST raid_rebuild_test_sb_io 00:13:57.934 ************************************ 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true true true 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 
00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=77103 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L 
bdev_raid 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 77103 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 77103 ']' 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:57.934 03:16:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.194 [2024-10-09 03:16:41.318772] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:13:58.194 [2024-10-09 03:16:41.318980] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:13:58.194 Zero copy mechanism will not be used. 
00:13:58.194 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77103 ] 00:13:58.194 [2024-10-09 03:16:41.464423] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:58.453 [2024-10-09 03:16:41.703234] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.713 [2024-10-09 03:16:41.929889] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.713 [2024-10-09 03:16:41.929988] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.973 BaseBdev1_malloc 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:58.973 [2024-10-09 03:16:42.207577] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:13:58.973 [2024-10-09 03:16:42.207731] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:58.973 [2024-10-09 03:16:42.207775] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:58.973 [2024-10-09 03:16:42.207812] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:58.973 [2024-10-09 03:16:42.210211] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:58.973 [2024-10-09 03:16:42.210286] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:58.973 BaseBdev1 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.973 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.233 BaseBdev2_malloc 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.233 [2024-10-09 03:16:42.300974] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:13:59.233 [2024-10-09 03:16:42.301039] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.233 [2024-10-09 03:16:42.301060] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0x616000007e80 00:13:59.233 [2024-10-09 03:16:42.301072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.233 [2024-10-09 03:16:42.303394] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.233 [2024-10-09 03:16:42.303432] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:59.233 BaseBdev2 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.233 spare_malloc 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.233 spare_delay 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.233 [2024-10-09 03:16:42.375354] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:13:59.233 
[2024-10-09 03:16:42.375417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:59.233 [2024-10-09 03:16:42.375437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:13:59.233 [2024-10-09 03:16:42.375449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:59.233 [2024-10-09 03:16:42.378030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:59.233 [2024-10-09 03:16:42.378090] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:13:59.233 spare 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.233 [2024-10-09 03:16:42.387402] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:59.233 [2024-10-09 03:16:42.389510] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:59.233 [2024-10-09 03:16:42.389739] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:59.233 [2024-10-09 03:16:42.389788] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:59.233 [2024-10-09 03:16:42.390069] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:13:59.233 [2024-10-09 03:16:42.390278] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:59.233 [2024-10-09 03:16:42.390318] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, 
raid_bdev 0x617000007780 00:13:59.233 [2024-10-09 03:16:42.390508] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.233 "name": "raid_bdev1", 00:13:59.233 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:13:59.233 "strip_size_kb": 0, 00:13:59.233 "state": "online", 00:13:59.233 "raid_level": "raid1", 00:13:59.233 "superblock": true, 00:13:59.233 "num_base_bdevs": 2, 00:13:59.233 "num_base_bdevs_discovered": 2, 00:13:59.233 "num_base_bdevs_operational": 2, 00:13:59.233 "base_bdevs_list": [ 00:13:59.233 { 00:13:59.233 "name": "BaseBdev1", 00:13:59.233 "uuid": "11260405-446e-56d2-8f61-aadd2f8690e6", 00:13:59.233 "is_configured": true, 00:13:59.233 "data_offset": 2048, 00:13:59.233 "data_size": 63488 00:13:59.233 }, 00:13:59.233 { 00:13:59.233 "name": "BaseBdev2", 00:13:59.233 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:13:59.233 "is_configured": true, 00:13:59.233 "data_offset": 2048, 00:13:59.233 "data_size": 63488 00:13:59.233 } 00:13:59.233 ] 00:13:59.233 }' 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.233 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:13:59.803 [2024-10-09 03:16:42.846904] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.803 [2024-10-09 03:16:42.958360] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:59.803 
03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.803 03:16:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.803 03:16:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:59.803 "name": "raid_bdev1", 00:13:59.803 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:13:59.803 "strip_size_kb": 0, 00:13:59.803 "state": "online", 00:13:59.803 "raid_level": "raid1", 00:13:59.803 "superblock": true, 00:13:59.803 "num_base_bdevs": 2, 00:13:59.803 "num_base_bdevs_discovered": 1, 00:13:59.803 "num_base_bdevs_operational": 1, 00:13:59.803 "base_bdevs_list": [ 00:13:59.803 { 00:13:59.803 "name": null, 00:13:59.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:59.803 "is_configured": false, 00:13:59.803 "data_offset": 0, 00:13:59.803 "data_size": 63488 00:13:59.803 }, 00:13:59.803 { 00:13:59.803 "name": "BaseBdev2", 00:13:59.803 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:13:59.803 "is_configured": true, 00:13:59.803 "data_offset": 2048, 
00:13:59.803 "data_size": 63488 00:13:59.803 } 00:13:59.803 ] 00:13:59.803 }' 00:13:59.803 03:16:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:59.803 03:16:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:13:59.803 [2024-10-09 03:16:43.039704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:59.803 I/O size of 3145728 is greater than zero copy threshold (65536). 00:13:59.803 Zero copy mechanism will not be used. 00:13:59.803 Running I/O for 60 seconds... 00:14:00.062 03:16:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:00.062 03:16:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.322 03:16:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:00.322 [2024-10-09 03:16:43.377390] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:00.322 03:16:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.322 03:16:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:00.322 [2024-10-09 03:16:43.427444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:00.322 [2024-10-09 03:16:43.429614] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:00.322 [2024-10-09 03:16:43.542037] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:00.322 [2024-10-09 03:16:43.542856] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:00.581 [2024-10-09 03:16:43.758017] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:00.581 
[2024-10-09 03:16:43.758475] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:00.840 191.00 IOPS, 573.00 MiB/s [2024-10-09T03:16:44.143Z] [2024-10-09 03:16:44.091396] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:01.100 [2024-10-09 03:16:44.201293] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.360 [2024-10-09 03:16:44.450065] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:01.360 "name": "raid_bdev1", 00:14:01.360 "uuid": 
"9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:01.360 "strip_size_kb": 0, 00:14:01.360 "state": "online", 00:14:01.360 "raid_level": "raid1", 00:14:01.360 "superblock": true, 00:14:01.360 "num_base_bdevs": 2, 00:14:01.360 "num_base_bdevs_discovered": 2, 00:14:01.360 "num_base_bdevs_operational": 2, 00:14:01.360 "process": { 00:14:01.360 "type": "rebuild", 00:14:01.360 "target": "spare", 00:14:01.360 "progress": { 00:14:01.360 "blocks": 12288, 00:14:01.360 "percent": 19 00:14:01.360 } 00:14:01.360 }, 00:14:01.360 "base_bdevs_list": [ 00:14:01.360 { 00:14:01.360 "name": "spare", 00:14:01.360 "uuid": "c381afeb-912b-5550-92b3-7e5ff1ffc3aa", 00:14:01.360 "is_configured": true, 00:14:01.360 "data_offset": 2048, 00:14:01.360 "data_size": 63488 00:14:01.360 }, 00:14:01.360 { 00:14:01.360 "name": "BaseBdev2", 00:14:01.360 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:01.360 "is_configured": true, 00:14:01.360 "data_offset": 2048, 00:14:01.360 "data_size": 63488 00:14:01.360 } 00:14:01.360 ] 00:14:01.360 }' 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.360 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.360 [2024-10-09 03:16:44.579991] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.621 [2024-10-09 03:16:44.666599] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:14:01.621 [2024-10-09 03:16:44.709858] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:01.621 [2024-10-09 03:16:44.718502] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:01.621 [2024-10-09 03:16:44.718587] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:01.621 [2024-10-09 03:16:44.718609] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:01.621 [2024-10-09 03:16:44.764733] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:01.621 "name": "raid_bdev1", 00:14:01.621 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:01.621 "strip_size_kb": 0, 00:14:01.621 "state": "online", 00:14:01.621 "raid_level": "raid1", 00:14:01.621 "superblock": true, 00:14:01.621 "num_base_bdevs": 2, 00:14:01.621 "num_base_bdevs_discovered": 1, 00:14:01.621 "num_base_bdevs_operational": 1, 00:14:01.621 "base_bdevs_list": [ 00:14:01.621 { 00:14:01.621 "name": null, 00:14:01.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:01.621 "is_configured": false, 00:14:01.621 "data_offset": 0, 00:14:01.621 "data_size": 63488 00:14:01.621 }, 00:14:01.621 { 00:14:01.621 "name": "BaseBdev2", 00:14:01.621 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:01.621 "is_configured": true, 00:14:01.621 "data_offset": 2048, 00:14:01.621 "data_size": 63488 00:14:01.621 } 00:14:01.621 ] 00:14:01.621 }' 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:01.621 03:16:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.141 170.00 IOPS, 510.00 MiB/s [2024-10-09T03:16:45.444Z] 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:02.141 "name": "raid_bdev1", 00:14:02.141 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:02.141 "strip_size_kb": 0, 00:14:02.141 "state": "online", 00:14:02.141 "raid_level": "raid1", 00:14:02.141 "superblock": true, 00:14:02.141 "num_base_bdevs": 2, 00:14:02.141 "num_base_bdevs_discovered": 1, 00:14:02.141 "num_base_bdevs_operational": 1, 00:14:02.141 "base_bdevs_list": [ 00:14:02.141 { 00:14:02.141 "name": null, 00:14:02.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.141 "is_configured": false, 00:14:02.141 "data_offset": 0, 00:14:02.141 "data_size": 63488 00:14:02.141 }, 00:14:02.141 { 00:14:02.141 "name": "BaseBdev2", 00:14:02.141 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:02.141 "is_configured": true, 00:14:02.141 "data_offset": 2048, 00:14:02.141 "data_size": 63488 00:14:02.141 } 00:14:02.141 ] 00:14:02.141 }' 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:02.141 [2024-10-09 03:16:45.357990] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.141 03:16:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:02.141 [2024-10-09 03:16:45.407233] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:02.141 [2024-10-09 03:16:45.409482] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:02.400 [2024-10-09 03:16:45.517634] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:02.400 [2024-10-09 03:16:45.518526] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:02.400 [2024-10-09 03:16:45.620493] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:02.400 [2024-10-09 03:16:45.620882] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:02.660 [2024-10-09 03:16:45.872277] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:02.660 [2024-10-09 03:16:45.873318] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:02.919 165.00 IOPS, 495.00 MiB/s [2024-10-09T03:16:46.222Z] [2024-10-09 03:16:46.089879] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:03.179 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.179 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.179 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.179 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.179 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.179 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.179 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.179 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.179 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.179 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.179 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.179 "name": "raid_bdev1", 00:14:03.179 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:03.179 "strip_size_kb": 0, 00:14:03.179 "state": "online", 00:14:03.179 "raid_level": "raid1", 00:14:03.179 "superblock": true, 00:14:03.179 "num_base_bdevs": 2, 
00:14:03.179 "num_base_bdevs_discovered": 2, 00:14:03.179 "num_base_bdevs_operational": 2, 00:14:03.179 "process": { 00:14:03.179 "type": "rebuild", 00:14:03.179 "target": "spare", 00:14:03.179 "progress": { 00:14:03.179 "blocks": 14336, 00:14:03.179 "percent": 22 00:14:03.179 } 00:14:03.179 }, 00:14:03.179 "base_bdevs_list": [ 00:14:03.179 { 00:14:03.179 "name": "spare", 00:14:03.179 "uuid": "c381afeb-912b-5550-92b3-7e5ff1ffc3aa", 00:14:03.179 "is_configured": true, 00:14:03.179 "data_offset": 2048, 00:14:03.179 "data_size": 63488 00:14:03.179 }, 00:14:03.179 { 00:14:03.179 "name": "BaseBdev2", 00:14:03.179 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:03.179 "is_configured": true, 00:14:03.179 "data_offset": 2048, 00:14:03.179 "data_size": 63488 00:14:03.179 } 00:14:03.179 ] 00:14:03.179 }' 00:14:03.179 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:03.439 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@706 -- # local timeout=433 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:03.439 "name": "raid_bdev1", 00:14:03.439 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:03.439 "strip_size_kb": 0, 00:14:03.439 "state": "online", 00:14:03.439 "raid_level": "raid1", 00:14:03.439 "superblock": true, 00:14:03.439 "num_base_bdevs": 2, 00:14:03.439 "num_base_bdevs_discovered": 2, 00:14:03.439 "num_base_bdevs_operational": 2, 00:14:03.439 "process": { 00:14:03.439 "type": "rebuild", 00:14:03.439 "target": "spare", 00:14:03.439 "progress": { 00:14:03.439 "blocks": 16384, 00:14:03.439 "percent": 25 00:14:03.439 } 00:14:03.439 }, 00:14:03.439 "base_bdevs_list": [ 
00:14:03.439 { 00:14:03.439 "name": "spare", 00:14:03.439 "uuid": "c381afeb-912b-5550-92b3-7e5ff1ffc3aa", 00:14:03.439 "is_configured": true, 00:14:03.439 "data_offset": 2048, 00:14:03.439 "data_size": 63488 00:14:03.439 }, 00:14:03.439 { 00:14:03.439 "name": "BaseBdev2", 00:14:03.439 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:03.439 "is_configured": true, 00:14:03.439 "data_offset": 2048, 00:14:03.439 "data_size": 63488 00:14:03.439 } 00:14:03.439 ] 00:14:03.439 }' 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:03.439 03:16:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:03.439 [2024-10-09 03:16:46.689453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:03.699 [2024-10-09 03:16:46.897596] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:03.699 [2024-10-09 03:16:46.898126] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:03.958 144.25 IOPS, 432.75 MiB/s [2024-10-09T03:16:47.261Z] [2024-10-09 03:16:47.243180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:04.528 [2024-10-09 03:16:47.569765] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:14:04.528 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 
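The `local timeout=433` / `(( SECONDS < timeout ))` / `sleep 1` trio visible in the trace is a standard bash bounded-polling loop: bash keeps `SECONDS` counting up from shell start, so comparing it against a precomputed deadline gives a cheap wall-clock timeout. A self-contained sketch of the same pattern (the `query_state` stub stands in for the test's `rpc_cmd`/`jq` pipeline and is an assumption, not SPDK code):

```shell
#!/usr/bin/env bash
# SECONDS holds whole seconds since the shell started; a deadline computed
# once up front bounds how long the loop may keep re-querying state.
query_state() { echo "rebuild"; }   # stub for the real RPC query (assumption)

timeout=$((SECONDS + 5))            # poll for at most ~5 seconds
state=""
while ((SECONDS < timeout)); do
    state=$(query_state)
    [[ $state == rebuild ]] && break
    sleep 1
done
echo "observed state: $state"
```

In the real test the loop body re-runs `verify_raid_bdev_process` each second until the rebuild target disappears or the 433-second budget is exhausted.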
-- # (( SECONDS < timeout )) 00:14:04.528 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:04.528 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:04.528 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:04.528 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:04.528 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:04.528 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:04.528 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:04.528 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.528 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:04.528 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.528 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:04.528 "name": "raid_bdev1", 00:14:04.528 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:04.528 "strip_size_kb": 0, 00:14:04.528 "state": "online", 00:14:04.528 "raid_level": "raid1", 00:14:04.528 "superblock": true, 00:14:04.528 "num_base_bdevs": 2, 00:14:04.528 "num_base_bdevs_discovered": 2, 00:14:04.528 "num_base_bdevs_operational": 2, 00:14:04.528 "process": { 00:14:04.528 "type": "rebuild", 00:14:04.529 "target": "spare", 00:14:04.529 "progress": { 00:14:04.529 "blocks": 32768, 00:14:04.529 "percent": 51 00:14:04.529 } 00:14:04.529 }, 00:14:04.529 "base_bdevs_list": [ 00:14:04.529 { 00:14:04.529 "name": "spare", 00:14:04.529 "uuid": "c381afeb-912b-5550-92b3-7e5ff1ffc3aa", 00:14:04.529 
"is_configured": true, 00:14:04.529 "data_offset": 2048, 00:14:04.529 "data_size": 63488 00:14:04.529 }, 00:14:04.529 { 00:14:04.529 "name": "BaseBdev2", 00:14:04.529 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:04.529 "is_configured": true, 00:14:04.529 "data_offset": 2048, 00:14:04.529 "data_size": 63488 00:14:04.529 } 00:14:04.529 ] 00:14:04.529 }' 00:14:04.529 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:04.529 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:04.529 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:04.529 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:04.529 03:16:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.356 125.80 IOPS, 377.40 MiB/s [2024-10-09T03:16:48.659Z] [2024-10-09 03:16:48.353338] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:14:05.614 [2024-10-09 03:16:48.781424] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:14:05.614 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:05.614 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:05.614 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:05.614 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:05.614 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:05.614 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:14:05.614 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.614 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.614 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:05.614 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.614 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.614 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:05.614 "name": "raid_bdev1", 00:14:05.614 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:05.614 "strip_size_kb": 0, 00:14:05.614 "state": "online", 00:14:05.614 "raid_level": "raid1", 00:14:05.614 "superblock": true, 00:14:05.614 "num_base_bdevs": 2, 00:14:05.614 "num_base_bdevs_discovered": 2, 00:14:05.614 "num_base_bdevs_operational": 2, 00:14:05.614 "process": { 00:14:05.614 "type": "rebuild", 00:14:05.614 "target": "spare", 00:14:05.614 "progress": { 00:14:05.614 "blocks": 53248, 00:14:05.614 "percent": 83 00:14:05.614 } 00:14:05.614 }, 00:14:05.614 "base_bdevs_list": [ 00:14:05.614 { 00:14:05.614 "name": "spare", 00:14:05.614 "uuid": "c381afeb-912b-5550-92b3-7e5ff1ffc3aa", 00:14:05.614 "is_configured": true, 00:14:05.614 "data_offset": 2048, 00:14:05.614 "data_size": 63488 00:14:05.614 }, 00:14:05.614 { 00:14:05.614 "name": "BaseBdev2", 00:14:05.614 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:05.614 "is_configured": true, 00:14:05.614 "data_offset": 2048, 00:14:05.614 "data_size": 63488 00:14:05.614 } 00:14:05.615 ] 00:14:05.615 }' 00:14:05.615 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:05.615 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:05.615 
03:16:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:05.874 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:05.874 03:16:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:05.874 113.17 IOPS, 339.50 MiB/s [2024-10-09T03:16:49.177Z] [2024-10-09 03:16:49.107437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:06.133 [2024-10-09 03:16:49.334740] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:06.393 [2024-10-09 03:16:49.440057] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:06.393 [2024-10-09 03:16:49.445353] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.961 03:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:06.961 03:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:06.961 03:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:06.961 03:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:06.961 03:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:06.961 03:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.961 03:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.961 03:16:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.961 03:16:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.961 03:16:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.961 "name": "raid_bdev1", 00:14:06.961 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:06.961 "strip_size_kb": 0, 00:14:06.961 "state": "online", 00:14:06.961 "raid_level": "raid1", 00:14:06.961 "superblock": true, 00:14:06.961 "num_base_bdevs": 2, 00:14:06.961 "num_base_bdevs_discovered": 2, 00:14:06.961 "num_base_bdevs_operational": 2, 00:14:06.961 "base_bdevs_list": [ 00:14:06.961 { 00:14:06.961 "name": "spare", 00:14:06.961 "uuid": "c381afeb-912b-5550-92b3-7e5ff1ffc3aa", 00:14:06.961 "is_configured": true, 00:14:06.961 "data_offset": 2048, 00:14:06.961 "data_size": 63488 00:14:06.961 }, 00:14:06.961 { 00:14:06.961 "name": "BaseBdev2", 00:14:06.961 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:06.961 "is_configured": true, 00:14:06.961 "data_offset": 2048, 00:14:06.961 "data_size": 63488 00:14:06.961 } 00:14:06.961 ] 00:14:06.961 }' 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:06.961 102.71 IOPS, 308.14 MiB/s [2024-10-09T03:16:50.264Z] 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:06.961 "name": "raid_bdev1", 00:14:06.961 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:06.961 "strip_size_kb": 0, 00:14:06.961 "state": "online", 00:14:06.961 "raid_level": "raid1", 00:14:06.961 "superblock": true, 00:14:06.961 "num_base_bdevs": 2, 00:14:06.961 "num_base_bdevs_discovered": 2, 00:14:06.961 "num_base_bdevs_operational": 2, 00:14:06.961 "base_bdevs_list": [ 00:14:06.961 { 00:14:06.961 "name": "spare", 00:14:06.961 "uuid": "c381afeb-912b-5550-92b3-7e5ff1ffc3aa", 00:14:06.961 "is_configured": true, 00:14:06.961 "data_offset": 2048, 00:14:06.961 "data_size": 63488 00:14:06.961 }, 00:14:06.961 { 00:14:06.961 "name": "BaseBdev2", 00:14:06.961 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:06.961 "is_configured": true, 00:14:06.961 "data_offset": 2048, 00:14:06.961 "data_size": 63488 00:14:06.961 } 00:14:06.961 ] 00:14:06.961 }' 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:06.961 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- 
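The `jq -r '.process.type // "none"'` filter that appears throughout the trace relies on jq's alternative operator: `//` substitutes the right-hand value whenever the left-hand path is `null` or absent, which is exactly what happens once the rebuild finishes and the `process` object vanishes from the `bdev_raid_get_bdevs` output (the `[[ none == \r\e\b\u\i\l\d ]]` comparison above failing is what breaks the wait loop). A minimal illustration (the sample JSON is invented, not captured RPC output):

```shell
#!/usr/bin/env bash
# While the rebuild runs, .process.type exists and is returned verbatim;
# afterwards the whole .process object is gone and // supplies the default.
echo '{"name":"raid_bdev1","process":{"type":"rebuild"}}' \
    | jq -r '.process.type // "none"'    # prints: rebuild
echo '{"name":"raid_bdev1"}' \
    | jq -r '.process.type // "none"'    # prints: none
```

The `-r` flag emits the raw string rather than a JSON-quoted one, so the result can be compared directly in a `[[ ... ]]` test.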
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.220 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:07.220 "name": "raid_bdev1", 00:14:07.220 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:07.220 "strip_size_kb": 0, 00:14:07.220 "state": "online", 00:14:07.220 "raid_level": "raid1", 00:14:07.220 "superblock": true, 00:14:07.220 "num_base_bdevs": 2, 00:14:07.220 "num_base_bdevs_discovered": 2, 00:14:07.220 "num_base_bdevs_operational": 2, 00:14:07.220 "base_bdevs_list": [ 00:14:07.220 { 00:14:07.220 "name": "spare", 00:14:07.220 "uuid": "c381afeb-912b-5550-92b3-7e5ff1ffc3aa", 00:14:07.220 "is_configured": true, 00:14:07.220 "data_offset": 2048, 00:14:07.220 "data_size": 63488 00:14:07.220 }, 00:14:07.220 { 00:14:07.220 "name": "BaseBdev2", 00:14:07.220 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:07.220 "is_configured": true, 00:14:07.220 "data_offset": 2048, 00:14:07.220 "data_size": 63488 00:14:07.220 } 00:14:07.220 ] 00:14:07.220 }' 00:14:07.220 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:07.220 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.480 [2024-10-09 03:16:50.627368] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:07.480 [2024-10-09 03:16:50.627504] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:07.480 00:14:07.480 Latency(us) 00:14:07.480 [2024-10-09T03:16:50.783Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:07.480 Job: raid_bdev1 (Core Mask 
0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:14:07.480 raid_bdev1 : 7.62 97.43 292.29 0.00 0.00 14081.70 295.13 113099.68 00:14:07.480 [2024-10-09T03:16:50.783Z] =================================================================================================================== 00:14:07.480 [2024-10-09T03:16:50.783Z] Total : 97.43 292.29 0.00 0.00 14081.70 295.13 113099.68 00:14:07.480 [2024-10-09 03:16:50.664634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:07.480 [2024-10-09 03:16:50.664725] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:07.480 [2024-10-09 03:16:50.664832] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:07.480 [2024-10-09 03:16:50.664900] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:07.480 { 00:14:07.480 "results": [ 00:14:07.480 { 00:14:07.480 "job": "raid_bdev1", 00:14:07.480 "core_mask": "0x1", 00:14:07.480 "workload": "randrw", 00:14:07.480 "percentage": 50, 00:14:07.480 "status": "finished", 00:14:07.480 "queue_depth": 2, 00:14:07.480 "io_size": 3145728, 00:14:07.480 "runtime": 7.615844, 00:14:07.480 "iops": 97.42846623433988, 00:14:07.480 "mibps": 292.2853987030196, 00:14:07.480 "io_failed": 0, 00:14:07.480 "io_timeout": 0, 00:14:07.480 "avg_latency_us": 14081.70246824939, 00:14:07.480 "min_latency_us": 295.12663755458516, 00:14:07.480 "max_latency_us": 113099.68209606987 00:14:07.480 } 00:14:07.480 ], 00:14:07.480 "core_count": 1 00:14:07.480 } 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.480 03:16:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.480 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:14:07.740 /dev/nbd0 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:07.740 03:16:50 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:07.740 1+0 records in 00:14:07.740 1+0 records out 00:14:07.740 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000473209 s, 8.7 MB/s 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.740 
03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:07.740 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:14:07.741 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:07.741 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:07.741 03:16:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:14:08.002 /dev/nbd1 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:08.002 1+0 records in 00:14:08.002 1+0 records out 00:14:08.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000433246 s, 9.5 MB/s 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:08.002 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:08.262 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:14:08.262 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 
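The `waitfornbd` trace above (the `(( i <= 20 ))` counter paired with `grep -q -w nbd1 /proc/partitions`) is a bounded retry loop that waits for the kernel to publish the freshly mapped NBD device. A hedged re-sketch of that pattern (function body reconstructed from the trace, not copied from autotest_common.sh):

```shell
#!/usr/bin/env bash
# Poll /proc/partitions until the named device shows up, giving the kernel
# up to 20 tries before reporting failure.
waitfornbd() {
    local nbd_name=$1 i
    for ((i = 1; i <= 20; i++)); do
        if grep -q -w "$nbd_name" /proc/partitions; then
            return 0            # device is visible to the block layer
        fi
        sleep 0.1
    done
    return 1                    # never appeared; caller decides what to do
}
```

The `-w` (whole-word) flag matters here: it keeps `nbd1` from matching `nbd10` or `nbd11` when several devices are mapped at once.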
00:14:08.262 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:14:08.262 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.262 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:08.262 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.262 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:14:08.522 03:16:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:08.522 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:08.782 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:14:08.782 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:14:08.782 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:08.782 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:08.782 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.782 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.782 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.782 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:08.782 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.782 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.782 
[2024-10-09 03:16:51.847727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:08.782 [2024-10-09 03:16:51.847887] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:08.782 [2024-10-09 03:16:51.847930] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:14:08.782 [2024-10-09 03:16:51.847964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:08.782 [2024-10-09 03:16:51.850592] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:08.782 [2024-10-09 03:16:51.850674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:08.782 [2024-10-09 03:16:51.850803] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:08.782 [2024-10-09 03:16:51.850889] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:08.782 [2024-10-09 03:16:51.851104] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:08.782 spare 00:14:08.782 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.783 [2024-10-09 03:16:51.951094] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:08.783 [2024-10-09 03:16:51.951265] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:08.783 [2024-10-09 03:16:51.951728] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:14:08.783 [2024-10-09 03:16:51.952042] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:08.783 [2024-10-09 03:16:51.952103] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:08.783 [2024-10-09 03:16:51.952426] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:08.783 03:16:51 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:08.783 03:16:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:08.783 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:08.783 "name": "raid_bdev1", 00:14:08.783 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:08.783 "strip_size_kb": 0, 00:14:08.783 "state": "online", 00:14:08.783 "raid_level": "raid1", 00:14:08.783 "superblock": true, 00:14:08.783 "num_base_bdevs": 2, 00:14:08.783 "num_base_bdevs_discovered": 2, 00:14:08.783 "num_base_bdevs_operational": 2, 00:14:08.783 "base_bdevs_list": [ 00:14:08.783 { 00:14:08.783 "name": "spare", 00:14:08.783 "uuid": "c381afeb-912b-5550-92b3-7e5ff1ffc3aa", 00:14:08.783 "is_configured": true, 00:14:08.783 "data_offset": 2048, 00:14:08.783 "data_size": 63488 00:14:08.783 }, 00:14:08.783 { 00:14:08.783 "name": "BaseBdev2", 00:14:08.783 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:08.783 "is_configured": true, 00:14:08.783 "data_offset": 2048, 00:14:08.783 "data_size": 63488 00:14:08.783 } 00:14:08.783 ] 00:14:08.783 }' 00:14:08.783 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:08.783 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:09.351 "name": "raid_bdev1", 00:14:09.351 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:09.351 "strip_size_kb": 0, 00:14:09.351 "state": "online", 00:14:09.351 "raid_level": "raid1", 00:14:09.351 "superblock": true, 00:14:09.351 "num_base_bdevs": 2, 00:14:09.351 "num_base_bdevs_discovered": 2, 00:14:09.351 "num_base_bdevs_operational": 2, 00:14:09.351 "base_bdevs_list": [ 00:14:09.351 { 00:14:09.351 "name": "spare", 00:14:09.351 "uuid": "c381afeb-912b-5550-92b3-7e5ff1ffc3aa", 00:14:09.351 "is_configured": true, 00:14:09.351 "data_offset": 2048, 00:14:09.351 "data_size": 63488 00:14:09.351 }, 00:14:09.351 { 00:14:09.351 "name": "BaseBdev2", 00:14:09.351 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:09.351 "is_configured": true, 00:14:09.351 "data_offset": 2048, 00:14:09.351 "data_size": 63488 00:14:09.351 } 00:14:09.351 ] 00:14:09.351 }' 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.351 [2024-10-09 03:16:52.583375] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:09.351 03:16:52 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.351 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:09.351 "name": "raid_bdev1", 00:14:09.351 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:09.351 "strip_size_kb": 0, 00:14:09.351 "state": "online", 00:14:09.351 "raid_level": "raid1", 00:14:09.351 "superblock": true, 00:14:09.351 "num_base_bdevs": 2, 00:14:09.351 "num_base_bdevs_discovered": 1, 00:14:09.351 "num_base_bdevs_operational": 1, 00:14:09.351 "base_bdevs_list": [ 00:14:09.351 { 00:14:09.351 "name": null, 00:14:09.351 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.351 "is_configured": false, 00:14:09.351 "data_offset": 0, 00:14:09.351 "data_size": 63488 00:14:09.351 }, 00:14:09.351 { 00:14:09.351 "name": "BaseBdev2", 00:14:09.351 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:09.351 "is_configured": true, 00:14:09.352 "data_offset": 2048, 00:14:09.352 "data_size": 63488 00:14:09.352 } 00:14:09.352 ] 00:14:09.352 }' 00:14:09.352 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:09.352 03:16:52 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:14:09.921 03:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:09.921 03:16:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.921 03:16:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:09.921 [2024-10-09 03:16:53.014696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.921 [2024-10-09 03:16:53.015067] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:09.921 [2024-10-09 03:16:53.015133] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:09.921 [2024-10-09 03:16:53.015202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:09.921 [2024-10-09 03:16:53.031872] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:14:09.921 03:16:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.921 03:16:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:09.921 [2024-10-09 03:16:53.034092] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:10.860 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:10.860 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:10.860 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:10.860 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:10.860 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 
00:14:10.860 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:10.860 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.860 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:10.860 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:10.860 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.860 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:10.860 "name": "raid_bdev1", 00:14:10.860 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:10.860 "strip_size_kb": 0, 00:14:10.860 "state": "online", 00:14:10.860 "raid_level": "raid1", 00:14:10.860 "superblock": true, 00:14:10.860 "num_base_bdevs": 2, 00:14:10.860 "num_base_bdevs_discovered": 2, 00:14:10.860 "num_base_bdevs_operational": 2, 00:14:10.860 "process": { 00:14:10.860 "type": "rebuild", 00:14:10.860 "target": "spare", 00:14:10.860 "progress": { 00:14:10.860 "blocks": 20480, 00:14:10.860 "percent": 32 00:14:10.860 } 00:14:10.860 }, 00:14:10.860 "base_bdevs_list": [ 00:14:10.860 { 00:14:10.860 "name": "spare", 00:14:10.860 "uuid": "c381afeb-912b-5550-92b3-7e5ff1ffc3aa", 00:14:10.860 "is_configured": true, 00:14:10.860 "data_offset": 2048, 00:14:10.860 "data_size": 63488 00:14:10.860 }, 00:14:10.860 { 00:14:10.860 "name": "BaseBdev2", 00:14:10.860 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:10.860 "is_configured": true, 00:14:10.860 "data_offset": 2048, 00:14:10.860 "data_size": 63488 00:14:10.860 } 00:14:10.860 ] 00:14:10.860 }' 00:14:10.860 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:10.860 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:10.860 
03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.120 [2024-10-09 03:16:54.198392] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.120 [2024-10-09 03:16:54.243903] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:11.120 [2024-10-09 03:16:54.244056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:11.120 [2024-10-09 03:16:54.244097] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:11.120 [2024-10-09 03:16:54.244119] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:11.120 "name": "raid_bdev1", 00:14:11.120 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:11.120 "strip_size_kb": 0, 00:14:11.120 "state": "online", 00:14:11.120 "raid_level": "raid1", 00:14:11.120 "superblock": true, 00:14:11.120 "num_base_bdevs": 2, 00:14:11.120 "num_base_bdevs_discovered": 1, 00:14:11.120 "num_base_bdevs_operational": 1, 00:14:11.120 "base_bdevs_list": [ 00:14:11.120 { 00:14:11.120 "name": null, 00:14:11.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:11.120 "is_configured": false, 00:14:11.120 "data_offset": 0, 00:14:11.120 "data_size": 63488 00:14:11.120 }, 00:14:11.120 { 00:14:11.120 "name": "BaseBdev2", 00:14:11.120 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:11.120 "is_configured": true, 00:14:11.120 "data_offset": 2048, 00:14:11.120 "data_size": 63488 00:14:11.120 } 00:14:11.120 ] 00:14:11.120 }' 00:14:11.120 03:16:54 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:11.120 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.690 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:11.690 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.690 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:11.690 [2024-10-09 03:16:54.718731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:11.690 [2024-10-09 03:16:54.718931] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:11.690 [2024-10-09 03:16:54.718970] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:14:11.690 [2024-10-09 03:16:54.718980] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:11.690 [2024-10-09 03:16:54.719582] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:11.690 [2024-10-09 03:16:54.719601] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:11.690 [2024-10-09 03:16:54.719718] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:11.690 [2024-10-09 03:16:54.719731] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:11.690 [2024-10-09 03:16:54.719745] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:11.690 [2024-10-09 03:16:54.719778] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:11.690 [2024-10-09 03:16:54.736884] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:14:11.690 spare 00:14:11.690 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.690 03:16:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:11.690 [2024-10-09 03:16:54.739142] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:12.629 "name": "raid_bdev1", 00:14:12.629 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:12.629 "strip_size_kb": 0, 00:14:12.629 
"state": "online", 00:14:12.629 "raid_level": "raid1", 00:14:12.629 "superblock": true, 00:14:12.629 "num_base_bdevs": 2, 00:14:12.629 "num_base_bdevs_discovered": 2, 00:14:12.629 "num_base_bdevs_operational": 2, 00:14:12.629 "process": { 00:14:12.629 "type": "rebuild", 00:14:12.629 "target": "spare", 00:14:12.629 "progress": { 00:14:12.629 "blocks": 20480, 00:14:12.629 "percent": 32 00:14:12.629 } 00:14:12.629 }, 00:14:12.629 "base_bdevs_list": [ 00:14:12.629 { 00:14:12.629 "name": "spare", 00:14:12.629 "uuid": "c381afeb-912b-5550-92b3-7e5ff1ffc3aa", 00:14:12.629 "is_configured": true, 00:14:12.629 "data_offset": 2048, 00:14:12.629 "data_size": 63488 00:14:12.629 }, 00:14:12.629 { 00:14:12.629 "name": "BaseBdev2", 00:14:12.629 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:12.629 "is_configured": true, 00:14:12.629 "data_offset": 2048, 00:14:12.629 "data_size": 63488 00:14:12.629 } 00:14:12.629 ] 00:14:12.629 }' 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.629 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.629 [2024-10-09 03:16:55.899042] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.888 [2024-10-09 03:16:55.949602] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:14:12.888 [2024-10-09 03:16:55.949765] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:12.888 [2024-10-09 03:16:55.949783] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:12.888 [2024-10-09 03:16:55.949794] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:12.888 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.888 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:12.888 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:12.888 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:12.888 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:12.888 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:12.888 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:12.889 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:12.889 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:12.889 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:12.889 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:12.889 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:12.889 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:12.889 03:16:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.889 03:16:55 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:12.889 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.889 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:12.889 "name": "raid_bdev1", 00:14:12.889 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:12.889 "strip_size_kb": 0, 00:14:12.889 "state": "online", 00:14:12.889 "raid_level": "raid1", 00:14:12.889 "superblock": true, 00:14:12.889 "num_base_bdevs": 2, 00:14:12.889 "num_base_bdevs_discovered": 1, 00:14:12.889 "num_base_bdevs_operational": 1, 00:14:12.889 "base_bdevs_list": [ 00:14:12.889 { 00:14:12.889 "name": null, 00:14:12.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.889 "is_configured": false, 00:14:12.889 "data_offset": 0, 00:14:12.889 "data_size": 63488 00:14:12.889 }, 00:14:12.889 { 00:14:12.889 "name": "BaseBdev2", 00:14:12.889 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:12.889 "is_configured": true, 00:14:12.889 "data_offset": 2048, 00:14:12.889 "data_size": 63488 00:14:12.889 } 00:14:12.889 ] 00:14:12.889 }' 00:14:12.889 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:12.889 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.148 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:13.148 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:13.148 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:13.148 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:13.148 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:13.148 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.148 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.148 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.148 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.148 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.409 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:13.409 "name": "raid_bdev1", 00:14:13.409 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:13.409 "strip_size_kb": 0, 00:14:13.409 "state": "online", 00:14:13.409 "raid_level": "raid1", 00:14:13.409 "superblock": true, 00:14:13.409 "num_base_bdevs": 2, 00:14:13.409 "num_base_bdevs_discovered": 1, 00:14:13.409 "num_base_bdevs_operational": 1, 00:14:13.409 "base_bdevs_list": [ 00:14:13.409 { 00:14:13.409 "name": null, 00:14:13.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.409 "is_configured": false, 00:14:13.409 "data_offset": 0, 00:14:13.409 "data_size": 63488 00:14:13.409 }, 00:14:13.409 { 00:14:13.409 "name": "BaseBdev2", 00:14:13.409 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:13.409 "is_configured": true, 00:14:13.409 "data_offset": 2048, 00:14:13.409 "data_size": 63488 00:14:13.409 } 00:14:13.409 ] 00:14:13.409 }' 00:14:13.409 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:13.409 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:13.409 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:13.409 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:13.409 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:13.409 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.409 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.409 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.409 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:13.409 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.409 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:13.409 [2024-10-09 03:16:56.577735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:13.409 [2024-10-09 03:16:56.577926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:13.409 [2024-10-09 03:16:56.577973] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:14:13.409 [2024-10-09 03:16:56.578009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:13.409 [2024-10-09 03:16:56.578574] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:13.409 [2024-10-09 03:16:56.578648] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:13.409 [2024-10-09 03:16:56.578777] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:13.409 [2024-10-09 03:16:56.578823] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:13.409 [2024-10-09 03:16:56.578863] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:13.409 [2024-10-09 03:16:56.578879] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:13.409 BaseBdev1 00:14:13.409 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.409 03:16:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.346 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.347 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.347 "name": "raid_bdev1", 00:14:14.347 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:14.347 "strip_size_kb": 0, 00:14:14.347 "state": "online", 00:14:14.347 "raid_level": "raid1", 00:14:14.347 "superblock": true, 00:14:14.347 "num_base_bdevs": 2, 00:14:14.347 "num_base_bdevs_discovered": 1, 00:14:14.347 "num_base_bdevs_operational": 1, 00:14:14.347 "base_bdevs_list": [ 00:14:14.347 { 00:14:14.347 "name": null, 00:14:14.347 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.347 "is_configured": false, 00:14:14.347 "data_offset": 0, 00:14:14.347 "data_size": 63488 00:14:14.347 }, 00:14:14.347 { 00:14:14.347 "name": "BaseBdev2", 00:14:14.347 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:14.347 "is_configured": true, 00:14:14.347 "data_offset": 2048, 00:14:14.347 "data_size": 63488 00:14:14.347 } 00:14:14.347 ] 00:14:14.347 }' 00:14:14.347 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.347 03:16:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.916 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:14.916 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.916 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:14.916 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:14.916 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.916 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.916 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.916 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.916 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.916 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.916 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.917 "name": "raid_bdev1", 00:14:14.917 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:14.917 "strip_size_kb": 0, 00:14:14.917 "state": "online", 00:14:14.917 "raid_level": "raid1", 00:14:14.917 "superblock": true, 00:14:14.917 "num_base_bdevs": 2, 00:14:14.917 "num_base_bdevs_discovered": 1, 00:14:14.917 "num_base_bdevs_operational": 1, 00:14:14.917 "base_bdevs_list": [ 00:14:14.917 { 00:14:14.917 "name": null, 00:14:14.917 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.917 "is_configured": false, 00:14:14.917 "data_offset": 0, 00:14:14.917 "data_size": 63488 00:14:14.917 }, 00:14:14.917 { 00:14:14.917 "name": "BaseBdev2", 00:14:14.917 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:14.917 "is_configured": true, 00:14:14.917 "data_offset": 2048, 00:14:14.917 "data_size": 63488 00:14:14.917 } 00:14:14.917 ] 00:14:14.917 }' 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@650 -- # local es=0 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:14.917 [2024-10-09 03:16:58.099531] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:14.917 [2024-10-09 03:16:58.099862] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:14.917 [2024-10-09 03:16:58.099919] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:14.917 request: 00:14:14.917 { 00:14:14.917 "base_bdev": "BaseBdev1", 00:14:14.917 "raid_bdev": "raid_bdev1", 00:14:14.917 "method": "bdev_raid_add_base_bdev", 00:14:14.917 "req_id": 1 00:14:14.917 } 00:14:14.917 Got JSON-RPC error response 00:14:14.917 response: 00:14:14.917 { 00:14:14.917 "code": -22, 00:14:14.917 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:14.917 } 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:14.917 03:16:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:15.856 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.116 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:16.116 "name": "raid_bdev1", 00:14:16.116 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:16.116 "strip_size_kb": 0, 00:14:16.116 "state": "online", 00:14:16.116 "raid_level": "raid1", 00:14:16.116 "superblock": true, 00:14:16.116 "num_base_bdevs": 2, 00:14:16.116 "num_base_bdevs_discovered": 1, 00:14:16.116 "num_base_bdevs_operational": 1, 00:14:16.116 "base_bdevs_list": [ 00:14:16.116 { 00:14:16.116 "name": null, 00:14:16.116 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.116 "is_configured": false, 00:14:16.116 "data_offset": 0, 00:14:16.116 "data_size": 63488 00:14:16.116 }, 00:14:16.116 { 00:14:16.116 "name": "BaseBdev2", 00:14:16.116 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:16.116 "is_configured": true, 00:14:16.116 "data_offset": 2048, 00:14:16.116 "data_size": 63488 00:14:16.116 } 00:14:16.116 ] 00:14:16.116 }' 00:14:16.116 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:16.116 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.376 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:16.376 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.376 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:16.376 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:16.376 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.376 03:16:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.376 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.376 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:16.376 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:16.376 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:16.376 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.376 "name": "raid_bdev1", 00:14:16.376 "uuid": "9d6d4b06-efe3-4a1d-9889-763bf095eb4d", 00:14:16.376 "strip_size_kb": 0, 00:14:16.376 "state": "online", 00:14:16.376 "raid_level": "raid1", 00:14:16.376 "superblock": true, 00:14:16.376 "num_base_bdevs": 2, 00:14:16.376 "num_base_bdevs_discovered": 1, 00:14:16.376 "num_base_bdevs_operational": 1, 00:14:16.376 "base_bdevs_list": [ 00:14:16.376 { 00:14:16.376 "name": null, 00:14:16.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:16.376 "is_configured": false, 00:14:16.376 "data_offset": 0, 00:14:16.376 "data_size": 63488 00:14:16.376 }, 00:14:16.376 { 00:14:16.376 "name": "BaseBdev2", 00:14:16.376 "uuid": "cfeb5ab8-3a7a-5d00-a5a5-ae3d92baeab4", 00:14:16.376 "is_configured": true, 00:14:16.376 "data_offset": 2048, 00:14:16.376 "data_size": 63488 00:14:16.376 } 00:14:16.376 ] 00:14:16.376 }' 00:14:16.376 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.376 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:16.376 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.636 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:16.636 03:16:59 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 77103 00:14:16.636 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 77103 ']' 00:14:16.636 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 77103 00:14:16.636 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:14:16.636 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.636 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77103 00:14:16.636 killing process with pid 77103 00:14:16.636 Received shutdown signal, test time was about 16.706054 seconds 00:14:16.636 00:14:16.636 Latency(us) 00:14:16.636 [2024-10-09T03:16:59.939Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:16.636 [2024-10-09T03:16:59.939Z] =================================================================================================================== 00:14:16.636 [2024-10-09T03:16:59.939Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:14:16.636 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:16.636 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:16.636 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77103' 00:14:16.636 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 77103 00:14:16.636 [2024-10-09 03:16:59.715917] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:16.636 03:16:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 77103 00:14:16.636 [2024-10-09 03:16:59.716075] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:16.636 [2024-10-09 03:16:59.716140] 
bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:16.636 [2024-10-09 03:16:59.716150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:16.895 [2024-10-09 03:16:59.951965] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:18.276 03:17:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:14:18.276 ************************************ 00:14:18.276 END TEST raid_rebuild_test_sb_io 00:14:18.276 ************************************ 00:14:18.276 00:14:18.276 real 0m20.104s 00:14:18.276 user 0m26.063s 00:14:18.276 sys 0m2.118s 00:14:18.276 03:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:18.276 03:17:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:14:18.276 03:17:01 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:18.276 03:17:01 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:14:18.276 03:17:01 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:18.276 03:17:01 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:18.276 03:17:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:18.276 ************************************ 00:14:18.276 START TEST raid_rebuild_test 00:14:18.276 ************************************ 00:14:18.276 03:17:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false false true 00:14:18.276 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:18.276 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:18.276 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:18.276 03:17:01 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:18.276 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:18.276 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:18.276 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 
00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77786 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77786 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 77786 ']' 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:18.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:18.277 03:17:01 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:18.277 [2024-10-09 03:17:01.487238] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:14:18.277 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:18.277 Zero copy mechanism will not be used. 00:14:18.277 [2024-10-09 03:17:01.487788] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77786 ] 00:14:18.537 [2024-10-09 03:17:01.651030] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.796 [2024-10-09 03:17:01.893207] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.056 [2024-10-09 03:17:02.121971] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.056 [2024-10-09 03:17:02.122012] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:19.056 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:19.056 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:14:19.056 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.056 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:19.056 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.056 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.056 BaseBdev1_malloc 00:14:19.056 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.056 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:19.056 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.056 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:14:19.056 [2024-10-09 03:17:02.354285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:19.056 [2024-10-09 03:17:02.354438] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.056 [2024-10-09 03:17:02.354482] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:19.056 [2024-10-09 03:17:02.354517] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.056 [2024-10-09 03:17:02.356854] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.056 [2024-10-09 03:17:02.356931] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:19.056 BaseBdev1 00:14:19.316 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.316 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.316 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:19.316 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.316 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.316 BaseBdev2_malloc 00:14:19.316 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.316 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:19.316 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.316 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.316 [2024-10-09 03:17:02.438071] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:19.316 [2024-10-09 03:17:02.438179] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:14:19.316 [2024-10-09 03:17:02.438212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:19.316 [2024-10-09 03:17:02.438243] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.317 [2024-10-09 03:17:02.440478] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.317 [2024-10-09 03:17:02.440550] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:19.317 BaseBdev2 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.317 BaseBdev3_malloc 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.317 [2024-10-09 03:17:02.498952] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:19.317 [2024-10-09 03:17:02.499078] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.317 [2024-10-09 03:17:02.499116] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:19.317 [2024-10-09 03:17:02.499147] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.317 [2024-10-09 03:17:02.501407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.317 BaseBdev3 00:14:19.317 [2024-10-09 03:17:02.501488] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.317 BaseBdev4_malloc 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.317 [2024-10-09 03:17:02.560273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:19.317 [2024-10-09 03:17:02.560344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.317 [2024-10-09 03:17:02.560365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:19.317 [2024-10-09 03:17:02.560377] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.317 [2024-10-09 03:17:02.562674] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.317 [2024-10-09 03:17:02.562715] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:19.317 BaseBdev4 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.317 spare_malloc 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.317 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.577 spare_delay 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.577 [2024-10-09 03:17:02.633048] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:19.577 [2024-10-09 03:17:02.633109] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:19.577 [2024-10-09 03:17:02.633126] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:19.577 [2024-10-09 03:17:02.633137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:19.577 [2024-10-09 
03:17:02.635350] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:19.577 [2024-10-09 03:17:02.635388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:19.577 spare 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.577 [2024-10-09 03:17:02.645090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:19.577 [2024-10-09 03:17:02.647106] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:19.577 [2024-10-09 03:17:02.647215] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:19.577 [2024-10-09 03:17:02.647291] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:19.577 [2024-10-09 03:17:02.647400] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:19.577 [2024-10-09 03:17:02.647440] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:19.577 [2024-10-09 03:17:02.647698] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:19.577 [2024-10-09 03:17:02.647913] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:19.577 [2024-10-09 03:17:02.647958] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:19.577 [2024-10-09 03:17:02.648131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.577 "name": "raid_bdev1", 00:14:19.577 "uuid": "bdf4a304-2d4c-44c2-a7d4-b32dbb44a99d", 00:14:19.577 "strip_size_kb": 0, 00:14:19.577 "state": "online", 00:14:19.577 "raid_level": 
"raid1", 00:14:19.577 "superblock": false, 00:14:19.577 "num_base_bdevs": 4, 00:14:19.577 "num_base_bdevs_discovered": 4, 00:14:19.577 "num_base_bdevs_operational": 4, 00:14:19.577 "base_bdevs_list": [ 00:14:19.577 { 00:14:19.577 "name": "BaseBdev1", 00:14:19.577 "uuid": "127e2ce0-fbc1-583c-9405-f96d23781b45", 00:14:19.577 "is_configured": true, 00:14:19.577 "data_offset": 0, 00:14:19.577 "data_size": 65536 00:14:19.577 }, 00:14:19.577 { 00:14:19.577 "name": "BaseBdev2", 00:14:19.577 "uuid": "0177a46a-e687-5a96-b5b6-0feddf9e7b43", 00:14:19.577 "is_configured": true, 00:14:19.577 "data_offset": 0, 00:14:19.577 "data_size": 65536 00:14:19.577 }, 00:14:19.577 { 00:14:19.577 "name": "BaseBdev3", 00:14:19.577 "uuid": "3e5ddf72-ca35-5978-abf3-dda59f5204f8", 00:14:19.577 "is_configured": true, 00:14:19.577 "data_offset": 0, 00:14:19.577 "data_size": 65536 00:14:19.577 }, 00:14:19.577 { 00:14:19.577 "name": "BaseBdev4", 00:14:19.577 "uuid": "bceea1fc-4e7f-589f-95c2-137b58fa6332", 00:14:19.577 "is_configured": true, 00:14:19.577 "data_offset": 0, 00:14:19.577 "data_size": 65536 00:14:19.577 } 00:14:19.577 ] 00:14:19.577 }' 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.577 03:17:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.837 03:17:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:19.837 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.837 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.837 03:17:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:19.837 [2024-10-09 03:17:03.072879] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:19.837 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.837 03:17:03 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:19.837 03:17:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.837 03:17:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:19.837 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.837 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:20.097 [2024-10-09 03:17:03.332165] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:20.097 /dev/nbd0 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:20.097 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.097 1+0 records in 00:14:20.097 1+0 records out 00:14:20.097 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000535923 s, 7.6 MB/s 00:14:20.098 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.098 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:20.098 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
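With `raid_bdev1` exposed at `/dev/nbd0`, the test fills the whole device using `dd bs=512 count=65536`, and the log reports 33554432 bytes copied. That size follows directly from the `blockcnt 65536, blocklen 512` geometry printed by `raid_bdev_configure_cont`, which can be checked with plain shell arithmetic:

```shell
blocklen=512      # blocklen reported by raid_bdev_configure_cont in the log
blockcnt=65536    # blockcnt reported in the log; also the dd count
total_bytes=$((blockcnt * blocklen))
total_mib=$((total_bytes / 1024 / 1024))
echo "${total_bytes} bytes (${total_mib} MiB)"
```

This matches the `33554432 bytes (34 MB, 32 MiB) copied` line in the dd summary below.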
00:14:20.098 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:20.098 03:17:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:20.098 03:17:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.357 03:17:03 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:20.357 03:17:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:20.357 03:17:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:20.357 03:17:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:14:25.634 65536+0 records in 00:14:25.634 65536+0 records out 00:14:25.634 33554432 bytes (34 MB, 32 MiB) copied, 5.5163 s, 6.1 MB/s 00:14:25.634 03:17:08 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:25.634 03:17:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:25.634 03:17:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:25.634 03:17:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:25.634 03:17:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:25.634 03:17:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:25.634 03:17:08 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:25.893 [2024-10-09 03:17:09.107686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:25.893 
03:17:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.893 [2024-10-09 03:17:09.145675] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:25.893 03:17:09 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.893 03:17:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.152 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:26.152 "name": "raid_bdev1", 00:14:26.152 "uuid": "bdf4a304-2d4c-44c2-a7d4-b32dbb44a99d", 00:14:26.152 "strip_size_kb": 0, 00:14:26.152 "state": "online", 00:14:26.152 "raid_level": "raid1", 00:14:26.152 "superblock": false, 00:14:26.152 "num_base_bdevs": 4, 00:14:26.152 "num_base_bdevs_discovered": 3, 00:14:26.152 "num_base_bdevs_operational": 3, 00:14:26.152 "base_bdevs_list": [ 00:14:26.152 { 00:14:26.152 "name": null, 00:14:26.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.152 "is_configured": false, 00:14:26.152 "data_offset": 0, 00:14:26.152 "data_size": 65536 00:14:26.152 }, 00:14:26.152 { 00:14:26.152 "name": "BaseBdev2", 00:14:26.152 "uuid": "0177a46a-e687-5a96-b5b6-0feddf9e7b43", 00:14:26.152 "is_configured": true, 00:14:26.152 "data_offset": 0, 00:14:26.152 "data_size": 65536 00:14:26.152 }, 00:14:26.152 { 00:14:26.152 "name": "BaseBdev3", 00:14:26.152 "uuid": "3e5ddf72-ca35-5978-abf3-dda59f5204f8", 00:14:26.152 "is_configured": true, 00:14:26.152 "data_offset": 0, 00:14:26.152 "data_size": 65536 00:14:26.152 }, 00:14:26.152 { 00:14:26.152 "name": "BaseBdev4", 00:14:26.152 "uuid": "bceea1fc-4e7f-589f-95c2-137b58fa6332", 00:14:26.152 
"is_configured": true, 00:14:26.152 "data_offset": 0, 00:14:26.152 "data_size": 65536 00:14:26.152 } 00:14:26.152 ] 00:14:26.152 }' 00:14:26.152 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:26.152 03:17:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.410 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:26.410 03:17:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:26.410 03:17:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.410 [2024-10-09 03:17:09.600981] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:26.411 [2024-10-09 03:17:09.615487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:14:26.411 03:17:09 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:26.411 03:17:09 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:26.411 [2024-10-09 03:17:09.617756] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:27.349 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:27.349 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:27.349 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:27.349 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:27.349 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:27.349 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.349 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:27.349 03:17:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.349 03:17:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.349 03:17:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.608 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:27.608 "name": "raid_bdev1", 00:14:27.609 "uuid": "bdf4a304-2d4c-44c2-a7d4-b32dbb44a99d", 00:14:27.609 "strip_size_kb": 0, 00:14:27.609 "state": "online", 00:14:27.609 "raid_level": "raid1", 00:14:27.609 "superblock": false, 00:14:27.609 "num_base_bdevs": 4, 00:14:27.609 "num_base_bdevs_discovered": 4, 00:14:27.609 "num_base_bdevs_operational": 4, 00:14:27.609 "process": { 00:14:27.609 "type": "rebuild", 00:14:27.609 "target": "spare", 00:14:27.609 "progress": { 00:14:27.609 "blocks": 20480, 00:14:27.609 "percent": 31 00:14:27.609 } 00:14:27.609 }, 00:14:27.609 "base_bdevs_list": [ 00:14:27.609 { 00:14:27.609 "name": "spare", 00:14:27.609 "uuid": "18d9d7a5-6dd0-580c-85b4-f15383d1cf0d", 00:14:27.609 "is_configured": true, 00:14:27.609 "data_offset": 0, 00:14:27.609 "data_size": 65536 00:14:27.609 }, 00:14:27.609 { 00:14:27.609 "name": "BaseBdev2", 00:14:27.609 "uuid": "0177a46a-e687-5a96-b5b6-0feddf9e7b43", 00:14:27.609 "is_configured": true, 00:14:27.609 "data_offset": 0, 00:14:27.609 "data_size": 65536 00:14:27.609 }, 00:14:27.609 { 00:14:27.609 "name": "BaseBdev3", 00:14:27.609 "uuid": "3e5ddf72-ca35-5978-abf3-dda59f5204f8", 00:14:27.609 "is_configured": true, 00:14:27.609 "data_offset": 0, 00:14:27.609 "data_size": 65536 00:14:27.609 }, 00:14:27.609 { 00:14:27.609 "name": "BaseBdev4", 00:14:27.609 "uuid": "bceea1fc-4e7f-589f-95c2-137b58fa6332", 00:14:27.609 "is_configured": true, 00:14:27.609 "data_offset": 0, 00:14:27.609 "data_size": 65536 00:14:27.609 } 00:14:27.609 ] 00:14:27.609 }' 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.609 [2024-10-09 03:17:10.773490] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.609 [2024-10-09 03:17:10.828291] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:27.609 [2024-10-09 03:17:10.828474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:27.609 [2024-10-09 03:17:10.828496] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:27.609 [2024-10-09 03:17:10.828508] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
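The `verify_raid_bdev_process` checks above read `.process.type // "none"` and `.process.target // "none"` from the raid bdev record; the `// "none"` alternative makes a bdev with no active process report `none` instead of `null`. A runnable sketch against a sample record shaped like the log's rebuild-in-progress output (values illustrative, `jq` assumed installed):

```shell
cat > /tmp/raid_process.json <<'EOF'
{
  "name": "raid_bdev1",
  "process": {
    "type": "rebuild",
    "target": "spare",
    "progress": { "blocks": 20480, "percent": 31 }
  }
}
EOF

# With no "process" key present, both filters would print "none".
ptype=$(jq -r '.process.type // "none"' /tmp/raid_process.json)
ptarget=$(jq -r '.process.target // "none"' /tmp/raid_process.json)

# Percent is integer progress: blocks rebuilt out of the 65536-block bdev,
# so 20480 blocks reports as 31 (20480 * 100 / 65536 = 31.25, truncated).
blocks=$(jq -r '.process.progress.blocks' /tmp/raid_process.json)
percent=$((blocks * 100 / 65536))
echo "type=$ptype target=$ptarget percent=$percent"
```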
00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:27.609 "name": "raid_bdev1", 00:14:27.609 "uuid": "bdf4a304-2d4c-44c2-a7d4-b32dbb44a99d", 00:14:27.609 "strip_size_kb": 0, 00:14:27.609 "state": "online", 00:14:27.609 "raid_level": "raid1", 00:14:27.609 "superblock": false, 00:14:27.609 "num_base_bdevs": 4, 00:14:27.609 "num_base_bdevs_discovered": 3, 00:14:27.609 "num_base_bdevs_operational": 3, 00:14:27.609 "base_bdevs_list": [ 00:14:27.609 { 00:14:27.609 "name": null, 00:14:27.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.609 "is_configured": false, 00:14:27.609 "data_offset": 0, 00:14:27.609 "data_size": 65536 00:14:27.609 }, 00:14:27.609 { 00:14:27.609 "name": "BaseBdev2", 00:14:27.609 "uuid": "0177a46a-e687-5a96-b5b6-0feddf9e7b43", 00:14:27.609 "is_configured": true, 00:14:27.609 "data_offset": 0, 00:14:27.609 "data_size": 65536 00:14:27.609 }, 00:14:27.609 { 
00:14:27.609 "name": "BaseBdev3", 00:14:27.609 "uuid": "3e5ddf72-ca35-5978-abf3-dda59f5204f8", 00:14:27.609 "is_configured": true, 00:14:27.609 "data_offset": 0, 00:14:27.609 "data_size": 65536 00:14:27.609 }, 00:14:27.609 { 00:14:27.609 "name": "BaseBdev4", 00:14:27.609 "uuid": "bceea1fc-4e7f-589f-95c2-137b58fa6332", 00:14:27.609 "is_configured": true, 00:14:27.609 "data_offset": 0, 00:14:27.609 "data_size": 65536 00:14:27.609 } 00:14:27.609 ] 00:14:27.609 }' 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:27.609 03:17:10 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:28.179 "name": "raid_bdev1", 00:14:28.179 "uuid": "bdf4a304-2d4c-44c2-a7d4-b32dbb44a99d", 00:14:28.179 "strip_size_kb": 0, 00:14:28.179 "state": "online", 
00:14:28.179 "raid_level": "raid1", 00:14:28.179 "superblock": false, 00:14:28.179 "num_base_bdevs": 4, 00:14:28.179 "num_base_bdevs_discovered": 3, 00:14:28.179 "num_base_bdevs_operational": 3, 00:14:28.179 "base_bdevs_list": [ 00:14:28.179 { 00:14:28.179 "name": null, 00:14:28.179 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.179 "is_configured": false, 00:14:28.179 "data_offset": 0, 00:14:28.179 "data_size": 65536 00:14:28.179 }, 00:14:28.179 { 00:14:28.179 "name": "BaseBdev2", 00:14:28.179 "uuid": "0177a46a-e687-5a96-b5b6-0feddf9e7b43", 00:14:28.179 "is_configured": true, 00:14:28.179 "data_offset": 0, 00:14:28.179 "data_size": 65536 00:14:28.179 }, 00:14:28.179 { 00:14:28.179 "name": "BaseBdev3", 00:14:28.179 "uuid": "3e5ddf72-ca35-5978-abf3-dda59f5204f8", 00:14:28.179 "is_configured": true, 00:14:28.179 "data_offset": 0, 00:14:28.179 "data_size": 65536 00:14:28.179 }, 00:14:28.179 { 00:14:28.179 "name": "BaseBdev4", 00:14:28.179 "uuid": "bceea1fc-4e7f-589f-95c2-137b58fa6332", 00:14:28.179 "is_configured": true, 00:14:28.179 "data_offset": 0, 00:14:28.179 "data_size": 65536 00:14:28.179 } 00:14:28.179 ] 00:14:28.179 }' 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.179 [2024-10-09 03:17:11.386016] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:28.179 [2024-10-09 03:17:11.401673] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:28.179 03:17:11 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:28.179 [2024-10-09 03:17:11.404077] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:29.155 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.155 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.155 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.155 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.155 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.155 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.155 03:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.155 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.155 03:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.155 03:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.155 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.155 "name": "raid_bdev1", 00:14:29.155 "uuid": "bdf4a304-2d4c-44c2-a7d4-b32dbb44a99d", 00:14:29.155 "strip_size_kb": 0, 00:14:29.155 "state": "online", 00:14:29.155 "raid_level": "raid1", 00:14:29.155 "superblock": false, 00:14:29.155 "num_base_bdevs": 4, 00:14:29.155 
"num_base_bdevs_discovered": 4, 00:14:29.155 "num_base_bdevs_operational": 4, 00:14:29.155 "process": { 00:14:29.155 "type": "rebuild", 00:14:29.155 "target": "spare", 00:14:29.155 "progress": { 00:14:29.155 "blocks": 20480, 00:14:29.155 "percent": 31 00:14:29.155 } 00:14:29.155 }, 00:14:29.155 "base_bdevs_list": [ 00:14:29.155 { 00:14:29.155 "name": "spare", 00:14:29.155 "uuid": "18d9d7a5-6dd0-580c-85b4-f15383d1cf0d", 00:14:29.155 "is_configured": true, 00:14:29.155 "data_offset": 0, 00:14:29.155 "data_size": 65536 00:14:29.155 }, 00:14:29.155 { 00:14:29.155 "name": "BaseBdev2", 00:14:29.155 "uuid": "0177a46a-e687-5a96-b5b6-0feddf9e7b43", 00:14:29.155 "is_configured": true, 00:14:29.155 "data_offset": 0, 00:14:29.155 "data_size": 65536 00:14:29.155 }, 00:14:29.155 { 00:14:29.155 "name": "BaseBdev3", 00:14:29.155 "uuid": "3e5ddf72-ca35-5978-abf3-dda59f5204f8", 00:14:29.155 "is_configured": true, 00:14:29.155 "data_offset": 0, 00:14:29.155 "data_size": 65536 00:14:29.155 }, 00:14:29.155 { 00:14:29.155 "name": "BaseBdev4", 00:14:29.155 "uuid": "bceea1fc-4e7f-589f-95c2-137b58fa6332", 00:14:29.155 "is_configured": true, 00:14:29.155 "data_offset": 0, 00:14:29.155 "data_size": 65536 00:14:29.155 } 00:14:29.155 ] 00:14:29.155 }' 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = 
raid1 ']' 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.415 [2024-10-09 03:17:12.565555] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:29.415 [2024-10-09 03:17:12.614357] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.415 03:17:12 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.415 "name": "raid_bdev1", 00:14:29.415 "uuid": "bdf4a304-2d4c-44c2-a7d4-b32dbb44a99d", 00:14:29.415 "strip_size_kb": 0, 00:14:29.415 "state": "online", 00:14:29.415 "raid_level": "raid1", 00:14:29.415 "superblock": false, 00:14:29.415 "num_base_bdevs": 4, 00:14:29.415 "num_base_bdevs_discovered": 3, 00:14:29.415 "num_base_bdevs_operational": 3, 00:14:29.415 "process": { 00:14:29.415 "type": "rebuild", 00:14:29.415 "target": "spare", 00:14:29.415 "progress": { 00:14:29.415 "blocks": 24576, 00:14:29.415 "percent": 37 00:14:29.415 } 00:14:29.415 }, 00:14:29.415 "base_bdevs_list": [ 00:14:29.415 { 00:14:29.415 "name": "spare", 00:14:29.415 "uuid": "18d9d7a5-6dd0-580c-85b4-f15383d1cf0d", 00:14:29.415 "is_configured": true, 00:14:29.415 "data_offset": 0, 00:14:29.415 "data_size": 65536 00:14:29.415 }, 00:14:29.415 { 00:14:29.415 "name": null, 00:14:29.415 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.415 "is_configured": false, 00:14:29.415 "data_offset": 0, 00:14:29.415 "data_size": 65536 00:14:29.415 }, 00:14:29.415 { 00:14:29.415 "name": "BaseBdev3", 00:14:29.415 "uuid": "3e5ddf72-ca35-5978-abf3-dda59f5204f8", 00:14:29.415 "is_configured": true, 00:14:29.415 "data_offset": 0, 00:14:29.415 "data_size": 65536 00:14:29.415 }, 00:14:29.415 { 00:14:29.415 "name": "BaseBdev4", 00:14:29.415 "uuid": "bceea1fc-4e7f-589f-95c2-137b58fa6332", 00:14:29.415 "is_configured": true, 00:14:29.415 "data_offset": 0, 00:14:29.415 "data_size": 65536 00:14:29.415 } 00:14:29.415 ] 00:14:29.415 }' 00:14:29.415 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=459 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:29.675 "name": "raid_bdev1", 00:14:29.675 "uuid": "bdf4a304-2d4c-44c2-a7d4-b32dbb44a99d", 00:14:29.675 "strip_size_kb": 0, 00:14:29.675 "state": "online", 00:14:29.675 "raid_level": "raid1", 00:14:29.675 "superblock": false, 00:14:29.675 "num_base_bdevs": 4, 00:14:29.675 "num_base_bdevs_discovered": 3, 00:14:29.675 "num_base_bdevs_operational": 3, 00:14:29.675 "process": { 00:14:29.675 "type": "rebuild", 00:14:29.675 "target": "spare", 00:14:29.675 "progress": { 
00:14:29.675 "blocks": 26624, 00:14:29.675 "percent": 40 00:14:29.675 } 00:14:29.675 }, 00:14:29.675 "base_bdevs_list": [ 00:14:29.675 { 00:14:29.675 "name": "spare", 00:14:29.675 "uuid": "18d9d7a5-6dd0-580c-85b4-f15383d1cf0d", 00:14:29.675 "is_configured": true, 00:14:29.675 "data_offset": 0, 00:14:29.675 "data_size": 65536 00:14:29.675 }, 00:14:29.675 { 00:14:29.675 "name": null, 00:14:29.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:29.675 "is_configured": false, 00:14:29.675 "data_offset": 0, 00:14:29.675 "data_size": 65536 00:14:29.675 }, 00:14:29.675 { 00:14:29.675 "name": "BaseBdev3", 00:14:29.675 "uuid": "3e5ddf72-ca35-5978-abf3-dda59f5204f8", 00:14:29.675 "is_configured": true, 00:14:29.675 "data_offset": 0, 00:14:29.675 "data_size": 65536 00:14:29.675 }, 00:14:29.675 { 00:14:29.675 "name": "BaseBdev4", 00:14:29.675 "uuid": "bceea1fc-4e7f-589f-95c2-137b58fa6332", 00:14:29.675 "is_configured": true, 00:14:29.675 "data_offset": 0, 00:14:29.675 "data_size": 65536 00:14:29.675 } 00:14:29.675 ] 00:14:29.675 }' 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:29.675 03:17:12 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:30.613 03:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:30.613 03:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:30.613 03:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:30.613 03:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:14:30.613 03:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:30.613 03:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:30.613 03:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:30.613 03:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.613 03:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:30.613 03:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.613 03:17:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.873 03:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:30.873 "name": "raid_bdev1", 00:14:30.873 "uuid": "bdf4a304-2d4c-44c2-a7d4-b32dbb44a99d", 00:14:30.873 "strip_size_kb": 0, 00:14:30.873 "state": "online", 00:14:30.873 "raid_level": "raid1", 00:14:30.873 "superblock": false, 00:14:30.873 "num_base_bdevs": 4, 00:14:30.873 "num_base_bdevs_discovered": 3, 00:14:30.873 "num_base_bdevs_operational": 3, 00:14:30.873 "process": { 00:14:30.873 "type": "rebuild", 00:14:30.873 "target": "spare", 00:14:30.873 "progress": { 00:14:30.873 "blocks": 49152, 00:14:30.873 "percent": 75 00:14:30.873 } 00:14:30.873 }, 00:14:30.873 "base_bdevs_list": [ 00:14:30.873 { 00:14:30.873 "name": "spare", 00:14:30.873 "uuid": "18d9d7a5-6dd0-580c-85b4-f15383d1cf0d", 00:14:30.873 "is_configured": true, 00:14:30.873 "data_offset": 0, 00:14:30.873 "data_size": 65536 00:14:30.873 }, 00:14:30.873 { 00:14:30.873 "name": null, 00:14:30.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:30.873 "is_configured": false, 00:14:30.873 "data_offset": 0, 00:14:30.873 "data_size": 65536 00:14:30.873 }, 00:14:30.873 { 00:14:30.873 "name": "BaseBdev3", 00:14:30.873 "uuid": 
"3e5ddf72-ca35-5978-abf3-dda59f5204f8", 00:14:30.873 "is_configured": true, 00:14:30.873 "data_offset": 0, 00:14:30.873 "data_size": 65536 00:14:30.873 }, 00:14:30.873 { 00:14:30.873 "name": "BaseBdev4", 00:14:30.873 "uuid": "bceea1fc-4e7f-589f-95c2-137b58fa6332", 00:14:30.873 "is_configured": true, 00:14:30.873 "data_offset": 0, 00:14:30.873 "data_size": 65536 00:14:30.873 } 00:14:30.873 ] 00:14:30.873 }' 00:14:30.873 03:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:30.873 03:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:30.873 03:17:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:30.873 03:17:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:30.873 03:17:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:31.442 [2024-10-09 03:17:14.629666] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:31.442 [2024-10-09 03:17:14.629846] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:31.442 [2024-10-09 03:17:14.629904] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.011 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:32.011 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:32.011 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.011 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:32.011 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:32.011 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.011 03:17:15 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.011 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.011 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.011 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.011 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.011 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.011 "name": "raid_bdev1", 00:14:32.011 "uuid": "bdf4a304-2d4c-44c2-a7d4-b32dbb44a99d", 00:14:32.011 "strip_size_kb": 0, 00:14:32.011 "state": "online", 00:14:32.011 "raid_level": "raid1", 00:14:32.011 "superblock": false, 00:14:32.011 "num_base_bdevs": 4, 00:14:32.011 "num_base_bdevs_discovered": 3, 00:14:32.011 "num_base_bdevs_operational": 3, 00:14:32.011 "base_bdevs_list": [ 00:14:32.011 { 00:14:32.011 "name": "spare", 00:14:32.011 "uuid": "18d9d7a5-6dd0-580c-85b4-f15383d1cf0d", 00:14:32.011 "is_configured": true, 00:14:32.012 "data_offset": 0, 00:14:32.012 "data_size": 65536 00:14:32.012 }, 00:14:32.012 { 00:14:32.012 "name": null, 00:14:32.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.012 "is_configured": false, 00:14:32.012 "data_offset": 0, 00:14:32.012 "data_size": 65536 00:14:32.012 }, 00:14:32.012 { 00:14:32.012 "name": "BaseBdev3", 00:14:32.012 "uuid": "3e5ddf72-ca35-5978-abf3-dda59f5204f8", 00:14:32.012 "is_configured": true, 00:14:32.012 "data_offset": 0, 00:14:32.012 "data_size": 65536 00:14:32.012 }, 00:14:32.012 { 00:14:32.012 "name": "BaseBdev4", 00:14:32.012 "uuid": "bceea1fc-4e7f-589f-95c2-137b58fa6332", 00:14:32.012 "is_configured": true, 00:14:32.012 "data_offset": 0, 00:14:32.012 "data_size": 65536 00:14:32.012 } 00:14:32.012 ] 00:14:32.012 }' 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # 
jq -r '.process.type // "none"' 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:32.012 "name": "raid_bdev1", 00:14:32.012 "uuid": "bdf4a304-2d4c-44c2-a7d4-b32dbb44a99d", 00:14:32.012 "strip_size_kb": 0, 00:14:32.012 "state": "online", 00:14:32.012 "raid_level": "raid1", 00:14:32.012 "superblock": false, 00:14:32.012 "num_base_bdevs": 4, 00:14:32.012 "num_base_bdevs_discovered": 3, 00:14:32.012 "num_base_bdevs_operational": 3, 00:14:32.012 
"base_bdevs_list": [ 00:14:32.012 { 00:14:32.012 "name": "spare", 00:14:32.012 "uuid": "18d9d7a5-6dd0-580c-85b4-f15383d1cf0d", 00:14:32.012 "is_configured": true, 00:14:32.012 "data_offset": 0, 00:14:32.012 "data_size": 65536 00:14:32.012 }, 00:14:32.012 { 00:14:32.012 "name": null, 00:14:32.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.012 "is_configured": false, 00:14:32.012 "data_offset": 0, 00:14:32.012 "data_size": 65536 00:14:32.012 }, 00:14:32.012 { 00:14:32.012 "name": "BaseBdev3", 00:14:32.012 "uuid": "3e5ddf72-ca35-5978-abf3-dda59f5204f8", 00:14:32.012 "is_configured": true, 00:14:32.012 "data_offset": 0, 00:14:32.012 "data_size": 65536 00:14:32.012 }, 00:14:32.012 { 00:14:32.012 "name": "BaseBdev4", 00:14:32.012 "uuid": "bceea1fc-4e7f-589f-95c2-137b58fa6332", 00:14:32.012 "is_configured": true, 00:14:32.012 "data_offset": 0, 00:14:32.012 "data_size": 65536 00:14:32.012 } 00:14:32.012 ] 00:14:32.012 }' 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.012 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.271 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.271 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.271 "name": "raid_bdev1", 00:14:32.271 "uuid": "bdf4a304-2d4c-44c2-a7d4-b32dbb44a99d", 00:14:32.271 "strip_size_kb": 0, 00:14:32.271 "state": "online", 00:14:32.271 "raid_level": "raid1", 00:14:32.271 "superblock": false, 00:14:32.271 "num_base_bdevs": 4, 00:14:32.271 "num_base_bdevs_discovered": 3, 00:14:32.271 "num_base_bdevs_operational": 3, 00:14:32.271 "base_bdevs_list": [ 00:14:32.271 { 00:14:32.271 "name": "spare", 00:14:32.271 "uuid": "18d9d7a5-6dd0-580c-85b4-f15383d1cf0d", 00:14:32.271 "is_configured": true, 00:14:32.271 "data_offset": 0, 00:14:32.271 "data_size": 65536 00:14:32.271 }, 00:14:32.271 { 00:14:32.271 "name": null, 00:14:32.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:32.271 "is_configured": false, 00:14:32.271 "data_offset": 0, 00:14:32.271 "data_size": 65536 00:14:32.271 }, 00:14:32.271 { 00:14:32.271 "name": "BaseBdev3", 00:14:32.271 "uuid": 
"3e5ddf72-ca35-5978-abf3-dda59f5204f8", 00:14:32.271 "is_configured": true, 00:14:32.271 "data_offset": 0, 00:14:32.271 "data_size": 65536 00:14:32.271 }, 00:14:32.271 { 00:14:32.271 "name": "BaseBdev4", 00:14:32.271 "uuid": "bceea1fc-4e7f-589f-95c2-137b58fa6332", 00:14:32.271 "is_configured": true, 00:14:32.271 "data_offset": 0, 00:14:32.271 "data_size": 65536 00:14:32.271 } 00:14:32.271 ] 00:14:32.271 }' 00:14:32.271 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.271 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.531 [2024-10-09 03:17:15.734833] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:32.531 [2024-10-09 03:17:15.734959] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:32.531 [2024-10-09 03:17:15.735072] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:32.531 [2024-10-09 03:17:15.735181] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:32.531 [2024-10-09 03:17:15.735258] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.531 03:17:15 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:32.795 /dev/nbd0 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:32.795 03:17:16 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:32.795 1+0 records in 00:14:32.795 1+0 records out 00:14:32.795 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000393719 s, 10.4 MB/s 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:32.795 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:33.061 /dev/nbd1 00:14:33.061 
03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # break 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:33.061 1+0 records in 00:14:33.061 1+0 records out 00:14:33.061 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000559928 s, 7.3 MB/s 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:33.061 03:17:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:33.320 03:17:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:33.320 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:33.320 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:33.320 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:33.320 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:33.320 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.320 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77786 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 77786 ']' 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 77786 00:14:33.580 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:14:33.840 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:33.840 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 77786 00:14:33.840 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:33.840 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:33.840 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 77786' 00:14:33.840 killing process with pid 77786 00:14:33.840 
03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@969 -- # kill 77786 00:14:33.840 Received shutdown signal, test time was about 60.000000 seconds 00:14:33.840 00:14:33.840 Latency(us) 00:14:33.840 [2024-10-09T03:17:17.143Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:33.840 [2024-10-09T03:17:17.143Z] =================================================================================================================== 00:14:33.840 [2024-10-09T03:17:17.143Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:33.840 [2024-10-09 03:17:16.912815] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:33.840 03:17:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@974 -- # wait 77786 00:14:34.409 [2024-10-09 03:17:17.464552] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:35.789 ************************************ 00:14:35.789 END TEST raid_rebuild_test 00:14:35.789 ************************************ 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:35.789 00:14:35.789 real 0m17.526s 00:14:35.789 user 0m19.270s 00:14:35.789 sys 0m3.193s 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:35.789 03:17:18 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:14:35.789 03:17:18 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:14:35.789 03:17:18 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:35.789 03:17:18 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:35.789 ************************************ 00:14:35.789 START TEST raid_rebuild_test_sb 00:14:35.789 ************************************ 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true false true 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:35.789 03:17:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= 
num_base_bdevs )) 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78238 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 78238 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 78238 ']' 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:35.789 03:17:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.789 [2024-10-09 03:17:19.087585] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:14:35.789 [2024-10-09 03:17:19.087783] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78238 ] 00:14:35.789 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:35.789 Zero copy mechanism will not be used. 00:14:36.048 [2024-10-09 03:17:19.250239] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:36.326 [2024-10-09 03:17:19.515225] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:14:36.605 [2024-10-09 03:17:19.771017] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.605 [2024-10-09 03:17:19.771072] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:36.865 03:17:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:36.865 03:17:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:14:36.865 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.865 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:36.865 03:17:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
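The `waitforlisten` helper traced above (autotest_common.sh@835-@840) polls until the spawned bdevperf process listens on `/var/tmp/spdk.sock`, giving up after `max_retries=100`. A minimal standalone sketch of that polling pattern follows; the function name and demo path are illustrative, and an ordinary file stands in for the UNIX socket that the real helper checks:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten polling pattern: retry until a path appears,
# give up after max_retries attempts. The real helper waits for spdk.sock.
waitfor_path() {
    local path=$1 max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.01
    done
    return 1
}

# Demo: a background job creates the path shortly after we start polling.
tmp=$(mktemp -u)
( sleep 0.05; : > "$tmp" ) &
waitfor_path "$tmp"; status=$?
wait
rm -f "$tmp"
```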
00:14:36.865 03:17:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.865 BaseBdev1_malloc 00:14:36.865 03:17:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.865 03:17:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:36.865 03:17:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.865 03:17:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.865 [2024-10-09 03:17:20.003706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:36.865 [2024-10-09 03:17:20.003885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.865 [2024-10-09 03:17:20.003932] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:36.865 [2024-10-09 03:17:20.003971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.865 [2024-10-09 03:17:20.006415] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.865 [2024-10-09 03:17:20.006497] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:36.865 BaseBdev1 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.865 BaseBdev2_malloc 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.865 [2024-10-09 03:17:20.075081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:36.865 [2024-10-09 03:17:20.075218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.865 [2024-10-09 03:17:20.075257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:36.865 [2024-10-09 03:17:20.075290] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.865 [2024-10-09 03:17:20.077969] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.865 [2024-10-09 03:17:20.078049] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:36.865 BaseBdev2 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.865 BaseBdev3_malloc 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 
00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.865 [2024-10-09 03:17:20.137796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:14:36.865 [2024-10-09 03:17:20.137956] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:36.865 [2024-10-09 03:17:20.137998] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:14:36.865 [2024-10-09 03:17:20.138030] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:36.865 [2024-10-09 03:17:20.140381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:36.865 [2024-10-09 03:17:20.140459] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:36.865 BaseBdev3 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.865 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.125 BaseBdev4_malloc 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:14:37.125 [2024-10-09 03:17:20.198806] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:14:37.125 [2024-10-09 03:17:20.198941] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.125 [2024-10-09 03:17:20.198966] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:14:37.125 [2024-10-09 03:17:20.198978] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.125 [2024-10-09 03:17:20.201304] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.125 [2024-10-09 03:17:20.201345] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:37.125 BaseBdev4 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.125 spare_malloc 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.125 spare_delay 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:37.125 03:17:20 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.125 [2024-10-09 03:17:20.272292] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:37.125 [2024-10-09 03:17:20.272406] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:37.125 [2024-10-09 03:17:20.272443] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:14:37.125 [2024-10-09 03:17:20.272475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:37.125 [2024-10-09 03:17:20.274861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:37.125 [2024-10-09 03:17:20.274933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:37.125 spare 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.125 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.125 [2024-10-09 03:17:20.284338] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:37.125 [2024-10-09 03:17:20.286395] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:37.125 [2024-10-09 03:17:20.286508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:37.125 [2024-10-09 03:17:20.286569] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:37.126 [2024-10-09 03:17:20.286755] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:37.126 [2024-10-09 03:17:20.286769] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:37.126 [2024-10-09 03:17:20.287032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:37.126 [2024-10-09 03:17:20.287213] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:37.126 [2024-10-09 03:17:20.287223] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:37.126 [2024-10-09 03:17:20.287361] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:37.126 "name": "raid_bdev1", 00:14:37.126 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:37.126 "strip_size_kb": 0, 00:14:37.126 "state": "online", 00:14:37.126 "raid_level": "raid1", 00:14:37.126 "superblock": true, 00:14:37.126 "num_base_bdevs": 4, 00:14:37.126 "num_base_bdevs_discovered": 4, 00:14:37.126 "num_base_bdevs_operational": 4, 00:14:37.126 "base_bdevs_list": [ 00:14:37.126 { 00:14:37.126 "name": "BaseBdev1", 00:14:37.126 "uuid": "6483c889-a0fb-503d-81ab-1639bbec4110", 00:14:37.126 "is_configured": true, 00:14:37.126 "data_offset": 2048, 00:14:37.126 "data_size": 63488 00:14:37.126 }, 00:14:37.126 { 00:14:37.126 "name": "BaseBdev2", 00:14:37.126 "uuid": "351e8d7a-4dc1-5348-b9e3-96955c5096f9", 00:14:37.126 "is_configured": true, 00:14:37.126 "data_offset": 2048, 00:14:37.126 "data_size": 63488 00:14:37.126 }, 00:14:37.126 { 00:14:37.126 "name": "BaseBdev3", 00:14:37.126 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:37.126 "is_configured": true, 00:14:37.126 "data_offset": 2048, 00:14:37.126 "data_size": 63488 00:14:37.126 }, 00:14:37.126 { 00:14:37.126 "name": "BaseBdev4", 00:14:37.126 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:37.126 "is_configured": true, 00:14:37.126 "data_offset": 2048, 00:14:37.126 "data_size": 63488 00:14:37.126 } 00:14:37.126 ] 00:14:37.126 }' 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:37.126 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:37.696 [2024-10-09 03:17:20.739935] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 
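The `raid_bdev_size` and `data_offset` values above are extracted from the `bdev_raid_get_bdevs` / `bdev_get_bdevs` RPC JSON with `jq -r` filters such as `.[].base_bdevs_list[0].data_offset`. As a jq-free sketch of the same extraction (the JSON below is a trimmed, hypothetical sample of that RPC output, not a captured response):

```shell
#!/usr/bin/env bash
# Pull a numeric field out of RPC-style JSON with sed, mirroring the
# jq -r '.[].base_bdevs_list[0].data_offset' step from the trace.
json='[{"name": "raid_bdev1", "base_bdevs_list": [{"name": "BaseBdev1", "data_offset": 2048, "data_size": 63488}]}]'
data_offset=$(printf '%s' "$json" | sed -n 's/.*"data_offset": \([0-9]*\).*/\1/p')
echo "$data_offset"
```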
00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:37.696 03:17:20 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:37.696 [2024-10-09 03:17:20.991134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:37.956 /dev/nbd0 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:37.956 
03:17:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:37.956 1+0 records in 00:14:37.956 1+0 records out 00:14:37.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0003693 s, 11.1 MB/s 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:37.956 03:17:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:44.531 63488+0 records in 00:14:44.531 63488+0 records out 00:14:44.531 32505856 bytes (33 MB, 31 MiB) copied, 5.97024 s, 5.4 MB/s 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 
-- # local rpc_server=/var/tmp/spdk.sock 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:44.531 [2024-10-09 03:17:27.236297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.531 [2024-10-09 03:17:27.268328] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:44.531 03:17:27 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.531 "name": "raid_bdev1", 00:14:44.531 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:44.531 "strip_size_kb": 0, 00:14:44.531 "state": "online", 
00:14:44.531 "raid_level": "raid1", 00:14:44.531 "superblock": true, 00:14:44.531 "num_base_bdevs": 4, 00:14:44.531 "num_base_bdevs_discovered": 3, 00:14:44.531 "num_base_bdevs_operational": 3, 00:14:44.531 "base_bdevs_list": [ 00:14:44.531 { 00:14:44.531 "name": null, 00:14:44.531 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.531 "is_configured": false, 00:14:44.531 "data_offset": 0, 00:14:44.531 "data_size": 63488 00:14:44.531 }, 00:14:44.531 { 00:14:44.531 "name": "BaseBdev2", 00:14:44.531 "uuid": "351e8d7a-4dc1-5348-b9e3-96955c5096f9", 00:14:44.531 "is_configured": true, 00:14:44.531 "data_offset": 2048, 00:14:44.531 "data_size": 63488 00:14:44.531 }, 00:14:44.531 { 00:14:44.531 "name": "BaseBdev3", 00:14:44.531 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:44.531 "is_configured": true, 00:14:44.531 "data_offset": 2048, 00:14:44.531 "data_size": 63488 00:14:44.531 }, 00:14:44.531 { 00:14:44.531 "name": "BaseBdev4", 00:14:44.531 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:44.531 "is_configured": true, 00:14:44.531 "data_offset": 2048, 00:14:44.531 "data_size": 63488 00:14:44.531 } 00:14:44.531 ] 00:14:44.531 }' 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:44.531 03:17:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:44.532 03:17:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.532 [2024-10-09 03:17:27.712023] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:44.532 [2024-10-09 03:17:27.726113] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:14:44.532 03:17:27 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:44.532 03:17:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:44.532 [2024-10-09 03:17:27.728447] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:45.471 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:45.471 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.471 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:45.471 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:45.471 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.471 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.471 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.471 03:17:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.471 03:17:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.471 03:17:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.731 "name": "raid_bdev1", 00:14:45.731 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:45.731 "strip_size_kb": 0, 00:14:45.731 "state": "online", 00:14:45.731 "raid_level": "raid1", 00:14:45.731 "superblock": true, 00:14:45.731 "num_base_bdevs": 4, 00:14:45.731 "num_base_bdevs_discovered": 4, 00:14:45.731 "num_base_bdevs_operational": 4, 00:14:45.731 "process": { 00:14:45.731 "type": "rebuild", 00:14:45.731 "target": "spare", 00:14:45.731 "progress": { 00:14:45.731 "blocks": 20480, 
00:14:45.731 "percent": 32 00:14:45.731 } 00:14:45.731 }, 00:14:45.731 "base_bdevs_list": [ 00:14:45.731 { 00:14:45.731 "name": "spare", 00:14:45.731 "uuid": "5e87bb0f-ce2a-55ab-9299-65a42d9604b6", 00:14:45.731 "is_configured": true, 00:14:45.731 "data_offset": 2048, 00:14:45.731 "data_size": 63488 00:14:45.731 }, 00:14:45.731 { 00:14:45.731 "name": "BaseBdev2", 00:14:45.731 "uuid": "351e8d7a-4dc1-5348-b9e3-96955c5096f9", 00:14:45.731 "is_configured": true, 00:14:45.731 "data_offset": 2048, 00:14:45.731 "data_size": 63488 00:14:45.731 }, 00:14:45.731 { 00:14:45.731 "name": "BaseBdev3", 00:14:45.731 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:45.731 "is_configured": true, 00:14:45.731 "data_offset": 2048, 00:14:45.731 "data_size": 63488 00:14:45.731 }, 00:14:45.731 { 00:14:45.731 "name": "BaseBdev4", 00:14:45.731 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:45.731 "is_configured": true, 00:14:45.731 "data_offset": 2048, 00:14:45.731 "data_size": 63488 00:14:45.731 } 00:14:45.731 ] 00:14:45.731 }' 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.731 [2024-10-09 03:17:28.893265] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.731 [2024-10-09 03:17:28.938042] 
bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:45.731 [2024-10-09 03:17:28.938169] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:45.731 [2024-10-09 03:17:28.938210] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:45.731 [2024-10-09 03:17:28.938236] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.731 03:17:28 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.731 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:45.731 "name": "raid_bdev1", 00:14:45.731 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:45.731 "strip_size_kb": 0, 00:14:45.731 "state": "online", 00:14:45.731 "raid_level": "raid1", 00:14:45.731 "superblock": true, 00:14:45.731 "num_base_bdevs": 4, 00:14:45.731 "num_base_bdevs_discovered": 3, 00:14:45.731 "num_base_bdevs_operational": 3, 00:14:45.731 "base_bdevs_list": [ 00:14:45.731 { 00:14:45.731 "name": null, 00:14:45.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.731 "is_configured": false, 00:14:45.731 "data_offset": 0, 00:14:45.731 "data_size": 63488 00:14:45.731 }, 00:14:45.731 { 00:14:45.731 "name": "BaseBdev2", 00:14:45.731 "uuid": "351e8d7a-4dc1-5348-b9e3-96955c5096f9", 00:14:45.731 "is_configured": true, 00:14:45.731 "data_offset": 2048, 00:14:45.731 "data_size": 63488 00:14:45.731 }, 00:14:45.731 { 00:14:45.731 "name": "BaseBdev3", 00:14:45.731 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:45.731 "is_configured": true, 00:14:45.731 "data_offset": 2048, 00:14:45.731 "data_size": 63488 00:14:45.731 }, 00:14:45.731 { 00:14:45.731 "name": "BaseBdev4", 00:14:45.731 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:45.731 "is_configured": true, 00:14:45.731 "data_offset": 2048, 00:14:45.731 "data_size": 63488 00:14:45.731 } 00:14:45.731 ] 00:14:45.731 }' 00:14:45.731 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:45.731 03:17:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 
00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:46.301 "name": "raid_bdev1", 00:14:46.301 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:46.301 "strip_size_kb": 0, 00:14:46.301 "state": "online", 00:14:46.301 "raid_level": "raid1", 00:14:46.301 "superblock": true, 00:14:46.301 "num_base_bdevs": 4, 00:14:46.301 "num_base_bdevs_discovered": 3, 00:14:46.301 "num_base_bdevs_operational": 3, 00:14:46.301 "base_bdevs_list": [ 00:14:46.301 { 00:14:46.301 "name": null, 00:14:46.301 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.301 "is_configured": false, 00:14:46.301 "data_offset": 0, 00:14:46.301 "data_size": 63488 00:14:46.301 }, 00:14:46.301 { 00:14:46.301 "name": "BaseBdev2", 00:14:46.301 "uuid": "351e8d7a-4dc1-5348-b9e3-96955c5096f9", 00:14:46.301 "is_configured": true, 00:14:46.301 "data_offset": 2048, 00:14:46.301 "data_size": 63488 00:14:46.301 }, 00:14:46.301 { 00:14:46.301 "name": "BaseBdev3", 00:14:46.301 "uuid": 
"11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:46.301 "is_configured": true, 00:14:46.301 "data_offset": 2048, 00:14:46.301 "data_size": 63488 00:14:46.301 }, 00:14:46.301 { 00:14:46.301 "name": "BaseBdev4", 00:14:46.301 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:46.301 "is_configured": true, 00:14:46.301 "data_offset": 2048, 00:14:46.301 "data_size": 63488 00:14:46.301 } 00:14:46.301 ] 00:14:46.301 }' 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.301 [2024-10-09 03:17:29.522976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:46.301 [2024-10-09 03:17:29.539444] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.301 03:17:29 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:46.301 [2024-10-09 03:17:29.542108] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.681 "name": "raid_bdev1", 00:14:47.681 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:47.681 "strip_size_kb": 0, 00:14:47.681 "state": "online", 00:14:47.681 "raid_level": "raid1", 00:14:47.681 "superblock": true, 00:14:47.681 "num_base_bdevs": 4, 00:14:47.681 "num_base_bdevs_discovered": 4, 00:14:47.681 "num_base_bdevs_operational": 4, 00:14:47.681 "process": { 00:14:47.681 "type": "rebuild", 00:14:47.681 "target": "spare", 00:14:47.681 "progress": { 00:14:47.681 "blocks": 20480, 00:14:47.681 "percent": 32 00:14:47.681 } 00:14:47.681 }, 00:14:47.681 "base_bdevs_list": [ 00:14:47.681 { 00:14:47.681 "name": "spare", 00:14:47.681 "uuid": "5e87bb0f-ce2a-55ab-9299-65a42d9604b6", 00:14:47.681 "is_configured": true, 00:14:47.681 "data_offset": 2048, 00:14:47.681 "data_size": 63488 00:14:47.681 }, 00:14:47.681 { 00:14:47.681 "name": "BaseBdev2", 00:14:47.681 "uuid": "351e8d7a-4dc1-5348-b9e3-96955c5096f9", 00:14:47.681 "is_configured": true, 00:14:47.681 "data_offset": 2048, 
00:14:47.681 "data_size": 63488 00:14:47.681 }, 00:14:47.681 { 00:14:47.681 "name": "BaseBdev3", 00:14:47.681 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:47.681 "is_configured": true, 00:14:47.681 "data_offset": 2048, 00:14:47.681 "data_size": 63488 00:14:47.681 }, 00:14:47.681 { 00:14:47.681 "name": "BaseBdev4", 00:14:47.681 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:47.681 "is_configured": true, 00:14:47.681 "data_offset": 2048, 00:14:47.681 "data_size": 63488 00:14:47.681 } 00:14:47.681 ] 00:14:47.681 }' 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:47.681 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.681 [2024-10-09 03:17:30.674586] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:47.681 [2024-10-09 03:17:30.852377] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.681 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.682 "name": "raid_bdev1", 00:14:47.682 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:47.682 "strip_size_kb": 0, 00:14:47.682 "state": "online", 00:14:47.682 "raid_level": "raid1", 00:14:47.682 "superblock": true, 00:14:47.682 "num_base_bdevs": 4, 
00:14:47.682 "num_base_bdevs_discovered": 3, 00:14:47.682 "num_base_bdevs_operational": 3, 00:14:47.682 "process": { 00:14:47.682 "type": "rebuild", 00:14:47.682 "target": "spare", 00:14:47.682 "progress": { 00:14:47.682 "blocks": 24576, 00:14:47.682 "percent": 38 00:14:47.682 } 00:14:47.682 }, 00:14:47.682 "base_bdevs_list": [ 00:14:47.682 { 00:14:47.682 "name": "spare", 00:14:47.682 "uuid": "5e87bb0f-ce2a-55ab-9299-65a42d9604b6", 00:14:47.682 "is_configured": true, 00:14:47.682 "data_offset": 2048, 00:14:47.682 "data_size": 63488 00:14:47.682 }, 00:14:47.682 { 00:14:47.682 "name": null, 00:14:47.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.682 "is_configured": false, 00:14:47.682 "data_offset": 0, 00:14:47.682 "data_size": 63488 00:14:47.682 }, 00:14:47.682 { 00:14:47.682 "name": "BaseBdev3", 00:14:47.682 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:47.682 "is_configured": true, 00:14:47.682 "data_offset": 2048, 00:14:47.682 "data_size": 63488 00:14:47.682 }, 00:14:47.682 { 00:14:47.682 "name": "BaseBdev4", 00:14:47.682 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:47.682 "is_configured": true, 00:14:47.682 "data_offset": 2048, 00:14:47.682 "data_size": 63488 00:14:47.682 } 00:14:47.682 ] 00:14:47.682 }' 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=477 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.682 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.942 03:17:30 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.942 03:17:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:47.942 03:17:30 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.942 03:17:31 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:47.942 03:17:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.942 "name": "raid_bdev1", 00:14:47.942 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:47.942 "strip_size_kb": 0, 00:14:47.942 "state": "online", 00:14:47.942 "raid_level": "raid1", 00:14:47.942 "superblock": true, 00:14:47.942 "num_base_bdevs": 4, 00:14:47.942 "num_base_bdevs_discovered": 3, 00:14:47.942 "num_base_bdevs_operational": 3, 00:14:47.942 "process": { 00:14:47.942 "type": "rebuild", 00:14:47.942 "target": "spare", 00:14:47.942 "progress": { 00:14:47.942 "blocks": 26624, 00:14:47.942 "percent": 41 00:14:47.942 } 00:14:47.942 }, 00:14:47.942 "base_bdevs_list": [ 00:14:47.942 { 00:14:47.942 "name": "spare", 00:14:47.942 "uuid": "5e87bb0f-ce2a-55ab-9299-65a42d9604b6", 00:14:47.942 "is_configured": true, 00:14:47.942 "data_offset": 2048, 00:14:47.942 "data_size": 63488 00:14:47.942 }, 00:14:47.942 { 
00:14:47.942 "name": null, 00:14:47.942 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.942 "is_configured": false, 00:14:47.942 "data_offset": 0, 00:14:47.942 "data_size": 63488 00:14:47.942 }, 00:14:47.942 { 00:14:47.942 "name": "BaseBdev3", 00:14:47.942 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:47.942 "is_configured": true, 00:14:47.942 "data_offset": 2048, 00:14:47.942 "data_size": 63488 00:14:47.942 }, 00:14:47.942 { 00:14:47.942 "name": "BaseBdev4", 00:14:47.942 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:47.942 "is_configured": true, 00:14:47.942 "data_offset": 2048, 00:14:47.942 "data_size": 63488 00:14:47.942 } 00:14:47.942 ] 00:14:47.942 }' 00:14:47.942 03:17:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.942 03:17:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:47.942 03:17:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.942 03:17:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:47.942 03:17:31 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.881 "name": "raid_bdev1", 00:14:48.881 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:48.881 "strip_size_kb": 0, 00:14:48.881 "state": "online", 00:14:48.881 "raid_level": "raid1", 00:14:48.881 "superblock": true, 00:14:48.881 "num_base_bdevs": 4, 00:14:48.881 "num_base_bdevs_discovered": 3, 00:14:48.881 "num_base_bdevs_operational": 3, 00:14:48.881 "process": { 00:14:48.881 "type": "rebuild", 00:14:48.881 "target": "spare", 00:14:48.881 "progress": { 00:14:48.881 "blocks": 49152, 00:14:48.881 "percent": 77 00:14:48.881 } 00:14:48.881 }, 00:14:48.881 "base_bdevs_list": [ 00:14:48.881 { 00:14:48.881 "name": "spare", 00:14:48.881 "uuid": "5e87bb0f-ce2a-55ab-9299-65a42d9604b6", 00:14:48.881 "is_configured": true, 00:14:48.881 "data_offset": 2048, 00:14:48.881 "data_size": 63488 00:14:48.881 }, 00:14:48.881 { 00:14:48.881 "name": null, 00:14:48.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.881 "is_configured": false, 00:14:48.881 "data_offset": 0, 00:14:48.881 "data_size": 63488 00:14:48.881 }, 00:14:48.881 { 00:14:48.881 "name": "BaseBdev3", 00:14:48.881 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:48.881 "is_configured": true, 00:14:48.881 "data_offset": 2048, 00:14:48.881 "data_size": 63488 00:14:48.881 }, 00:14:48.881 { 00:14:48.881 "name": "BaseBdev4", 00:14:48.881 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:48.881 "is_configured": true, 00:14:48.881 "data_offset": 
2048, 00:14:48.881 "data_size": 63488 00:14:48.881 } 00:14:48.881 ] 00:14:48.881 }' 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:48.881 03:17:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:49.141 03:17:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:49.141 03:17:32 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:49.710 [2024-10-09 03:17:32.769249] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:49.710 [2024-10-09 03:17:32.769365] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:49.710 [2024-10-09 03:17:32.769546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:49.969 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:49.969 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:49.969 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:49.969 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:49.969 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:49.969 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:49.969 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:49.969 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:49.969 03:17:33 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.969 03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.969 03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.229 "name": "raid_bdev1", 00:14:50.229 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:50.229 "strip_size_kb": 0, 00:14:50.229 "state": "online", 00:14:50.229 "raid_level": "raid1", 00:14:50.229 "superblock": true, 00:14:50.229 "num_base_bdevs": 4, 00:14:50.229 "num_base_bdevs_discovered": 3, 00:14:50.229 "num_base_bdevs_operational": 3, 00:14:50.229 "base_bdevs_list": [ 00:14:50.229 { 00:14:50.229 "name": "spare", 00:14:50.229 "uuid": "5e87bb0f-ce2a-55ab-9299-65a42d9604b6", 00:14:50.229 "is_configured": true, 00:14:50.229 "data_offset": 2048, 00:14:50.229 "data_size": 63488 00:14:50.229 }, 00:14:50.229 { 00:14:50.229 "name": null, 00:14:50.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.229 "is_configured": false, 00:14:50.229 "data_offset": 0, 00:14:50.229 "data_size": 63488 00:14:50.229 }, 00:14:50.229 { 00:14:50.229 "name": "BaseBdev3", 00:14:50.229 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:50.229 "is_configured": true, 00:14:50.229 "data_offset": 2048, 00:14:50.229 "data_size": 63488 00:14:50.229 }, 00:14:50.229 { 00:14:50.229 "name": "BaseBdev4", 00:14:50.229 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:50.229 "is_configured": true, 00:14:50.229 "data_offset": 2048, 00:14:50.229 "data_size": 63488 00:14:50.229 } 00:14:50.229 ] 00:14:50.229 }' 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:50.229 "name": "raid_bdev1", 00:14:50.229 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:50.229 "strip_size_kb": 0, 00:14:50.229 "state": "online", 00:14:50.229 "raid_level": "raid1", 00:14:50.229 "superblock": true, 00:14:50.229 "num_base_bdevs": 4, 00:14:50.229 "num_base_bdevs_discovered": 3, 00:14:50.229 "num_base_bdevs_operational": 3, 00:14:50.229 "base_bdevs_list": [ 00:14:50.229 { 00:14:50.229 "name": "spare", 00:14:50.229 "uuid": "5e87bb0f-ce2a-55ab-9299-65a42d9604b6", 00:14:50.229 "is_configured": true, 00:14:50.229 "data_offset": 2048, 
00:14:50.229 "data_size": 63488 00:14:50.229 }, 00:14:50.229 { 00:14:50.229 "name": null, 00:14:50.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.229 "is_configured": false, 00:14:50.229 "data_offset": 0, 00:14:50.229 "data_size": 63488 00:14:50.229 }, 00:14:50.229 { 00:14:50.229 "name": "BaseBdev3", 00:14:50.229 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:50.229 "is_configured": true, 00:14:50.229 "data_offset": 2048, 00:14:50.229 "data_size": 63488 00:14:50.229 }, 00:14:50.229 { 00:14:50.229 "name": "BaseBdev4", 00:14:50.229 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:50.229 "is_configured": true, 00:14:50.229 "data_offset": 2048, 00:14:50.229 "data_size": 63488 00:14:50.229 } 00:14:50.229 ] 00:14:50.229 }' 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:50.229 
03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:50.229 03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.489 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:50.489 "name": "raid_bdev1", 00:14:50.489 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:50.489 "strip_size_kb": 0, 00:14:50.489 "state": "online", 00:14:50.489 "raid_level": "raid1", 00:14:50.489 "superblock": true, 00:14:50.489 "num_base_bdevs": 4, 00:14:50.489 "num_base_bdevs_discovered": 3, 00:14:50.489 "num_base_bdevs_operational": 3, 00:14:50.489 "base_bdevs_list": [ 00:14:50.489 { 00:14:50.489 "name": "spare", 00:14:50.489 "uuid": "5e87bb0f-ce2a-55ab-9299-65a42d9604b6", 00:14:50.489 "is_configured": true, 00:14:50.489 "data_offset": 2048, 00:14:50.489 "data_size": 63488 00:14:50.489 }, 00:14:50.489 { 00:14:50.489 "name": null, 00:14:50.489 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:50.489 "is_configured": false, 00:14:50.489 "data_offset": 0, 00:14:50.489 "data_size": 63488 00:14:50.489 }, 00:14:50.489 { 00:14:50.489 "name": "BaseBdev3", 00:14:50.489 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:50.489 "is_configured": true, 00:14:50.489 "data_offset": 2048, 00:14:50.489 "data_size": 63488 
00:14:50.489 }, 00:14:50.489 { 00:14:50.489 "name": "BaseBdev4", 00:14:50.489 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:50.489 "is_configured": true, 00:14:50.489 "data_offset": 2048, 00:14:50.489 "data_size": 63488 00:14:50.489 } 00:14:50.489 ] 00:14:50.489 }' 00:14:50.489 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:50.489 03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.749 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:50.749 03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.749 03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.749 [2024-10-09 03:17:33.952792] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:50.749 [2024-10-09 03:17:33.952860] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:50.749 [2024-10-09 03:17:33.952986] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:50.749 [2024-10-09 03:17:33.953079] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:50.749 [2024-10-09 03:17:33.953092] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:50.749 03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.749 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:50.749 03:17:33 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:50.749 03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.749 03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.749 
03:17:33 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.749 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:50.749 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:50.749 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:50.749 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:50.749 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:50.749 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:50.749 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:50.749 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:50.749 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:50.749 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:50.749 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:50.749 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:50.749 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:51.009 /dev/nbd0 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 
00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.009 1+0 records in 00:14:51.009 1+0 records out 00:14:51.009 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000355601 s, 11.5 MB/s 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:51.009 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:51.269 /dev/nbd1 00:14:51.269 03:17:34 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:51.269 1+0 records in 00:14:51.269 1+0 records out 00:14:51.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000401944 s, 10.2 MB/s 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:14:51.269 03:17:34 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:51.269 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:51.529 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:51.529 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:51.529 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:51.529 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:51.529 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:51.529 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.529 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:51.789 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:51.789 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:51.789 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:51.789 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:51.789 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:51.789 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:51.789 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:51.789 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:51.789 03:17:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:51.789 03:17:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.049 [2024-10-09 03:17:35.177127] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
spare_delay 00:14:52.049 [2024-10-09 03:17:35.177205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:52.049 [2024-10-09 03:17:35.177236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:14:52.049 [2024-10-09 03:17:35.177247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:52.049 [2024-10-09 03:17:35.180084] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:52.049 [2024-10-09 03:17:35.180134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:52.049 [2024-10-09 03:17:35.180257] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:52.049 [2024-10-09 03:17:35.180337] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:52.049 [2024-10-09 03:17:35.180519] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:52.049 [2024-10-09 03:17:35.180643] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:52.049 spare 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.049 [2024-10-09 03:17:35.280558] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:52.049 [2024-10-09 03:17:35.280603] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:52.049 [2024-10-09 03:17:35.281056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:52.049 [2024-10-09 03:17:35.281302] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:52.049 [2024-10-09 03:17:35.281325] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:52.049 [2024-10-09 03:17:35.281550] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.049 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.050 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:52.050 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.050 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.050 "name": "raid_bdev1", 00:14:52.050 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:52.050 "strip_size_kb": 0, 00:14:52.050 "state": "online", 00:14:52.050 "raid_level": "raid1", 00:14:52.050 "superblock": true, 00:14:52.050 "num_base_bdevs": 4, 00:14:52.050 "num_base_bdevs_discovered": 3, 00:14:52.050 "num_base_bdevs_operational": 3, 00:14:52.050 "base_bdevs_list": [ 00:14:52.050 { 00:14:52.050 "name": "spare", 00:14:52.050 "uuid": "5e87bb0f-ce2a-55ab-9299-65a42d9604b6", 00:14:52.050 "is_configured": true, 00:14:52.050 "data_offset": 2048, 00:14:52.050 "data_size": 63488 00:14:52.050 }, 00:14:52.050 { 00:14:52.050 "name": null, 00:14:52.050 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.050 "is_configured": false, 00:14:52.050 "data_offset": 2048, 00:14:52.050 "data_size": 63488 00:14:52.050 }, 00:14:52.050 { 00:14:52.050 "name": "BaseBdev3", 00:14:52.050 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:52.050 "is_configured": true, 00:14:52.050 "data_offset": 2048, 00:14:52.050 "data_size": 63488 00:14:52.050 }, 00:14:52.050 { 00:14:52.050 "name": "BaseBdev4", 00:14:52.050 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:52.050 "is_configured": true, 00:14:52.050 "data_offset": 2048, 00:14:52.050 "data_size": 63488 00:14:52.050 } 00:14:52.050 ] 00:14:52.050 }' 00:14:52.050 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.050 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:52.619 
03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:52.619 "name": "raid_bdev1", 00:14:52.619 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:52.619 "strip_size_kb": 0, 00:14:52.619 "state": "online", 00:14:52.619 "raid_level": "raid1", 00:14:52.619 "superblock": true, 00:14:52.619 "num_base_bdevs": 4, 00:14:52.619 "num_base_bdevs_discovered": 3, 00:14:52.619 "num_base_bdevs_operational": 3, 00:14:52.619 "base_bdevs_list": [ 00:14:52.619 { 00:14:52.619 "name": "spare", 00:14:52.619 "uuid": "5e87bb0f-ce2a-55ab-9299-65a42d9604b6", 00:14:52.619 "is_configured": true, 00:14:52.619 "data_offset": 2048, 00:14:52.619 "data_size": 63488 00:14:52.619 }, 00:14:52.619 { 00:14:52.619 "name": null, 00:14:52.619 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.619 "is_configured": false, 00:14:52.619 "data_offset": 2048, 00:14:52.619 "data_size": 63488 00:14:52.619 }, 00:14:52.619 { 00:14:52.619 "name": "BaseBdev3", 00:14:52.619 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:52.619 "is_configured": true, 00:14:52.619 "data_offset": 2048, 00:14:52.619 "data_size": 63488 
00:14:52.619 }, 00:14:52.619 { 00:14:52.619 "name": "BaseBdev4", 00:14:52.619 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:52.619 "is_configured": true, 00:14:52.619 "data_offset": 2048, 00:14:52.619 "data_size": 63488 00:14:52.619 } 00:14:52.619 ] 00:14:52.619 }' 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.619 [2024-10-09 03:17:35.880857] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.619 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.878 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.878 "name": "raid_bdev1", 00:14:52.878 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:52.878 "strip_size_kb": 0, 00:14:52.878 "state": "online", 00:14:52.878 "raid_level": "raid1", 00:14:52.878 "superblock": true, 00:14:52.878 "num_base_bdevs": 4, 00:14:52.878 "num_base_bdevs_discovered": 2, 00:14:52.878 
"num_base_bdevs_operational": 2, 00:14:52.878 "base_bdevs_list": [ 00:14:52.878 { 00:14:52.878 "name": null, 00:14:52.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.878 "is_configured": false, 00:14:52.878 "data_offset": 0, 00:14:52.878 "data_size": 63488 00:14:52.878 }, 00:14:52.878 { 00:14:52.878 "name": null, 00:14:52.878 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.878 "is_configured": false, 00:14:52.878 "data_offset": 2048, 00:14:52.878 "data_size": 63488 00:14:52.878 }, 00:14:52.878 { 00:14:52.878 "name": "BaseBdev3", 00:14:52.878 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:52.878 "is_configured": true, 00:14:52.878 "data_offset": 2048, 00:14:52.878 "data_size": 63488 00:14:52.878 }, 00:14:52.878 { 00:14:52.878 "name": "BaseBdev4", 00:14:52.878 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:52.878 "is_configured": true, 00:14:52.878 "data_offset": 2048, 00:14:52.878 "data_size": 63488 00:14:52.878 } 00:14:52.878 ] 00:14:52.878 }' 00:14:52.878 03:17:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.878 03:17:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.138 03:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:53.138 03:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.138 03:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.138 [2024-10-09 03:17:36.328095] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:53.138 [2024-10-09 03:17:36.328397] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:53.138 [2024-10-09 03:17:36.328433] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:53.138 [2024-10-09 03:17:36.328486] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:53.138 [2024-10-09 03:17:36.344146] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:14:53.138 03:17:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.138 03:17:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:53.138 [2024-10-09 03:17:36.346631] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:54.077 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:54.077 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.077 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:54.077 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:54.077 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.077 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.077 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.077 03:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.077 03:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.077 03:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.337 "name": "raid_bdev1", 00:14:54.337 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:54.337 "strip_size_kb": 0, 00:14:54.337 "state": "online", 00:14:54.337 "raid_level": "raid1", 
00:14:54.337 "superblock": true, 00:14:54.337 "num_base_bdevs": 4, 00:14:54.337 "num_base_bdevs_discovered": 3, 00:14:54.337 "num_base_bdevs_operational": 3, 00:14:54.337 "process": { 00:14:54.337 "type": "rebuild", 00:14:54.337 "target": "spare", 00:14:54.337 "progress": { 00:14:54.337 "blocks": 20480, 00:14:54.337 "percent": 32 00:14:54.337 } 00:14:54.337 }, 00:14:54.337 "base_bdevs_list": [ 00:14:54.337 { 00:14:54.337 "name": "spare", 00:14:54.337 "uuid": "5e87bb0f-ce2a-55ab-9299-65a42d9604b6", 00:14:54.337 "is_configured": true, 00:14:54.337 "data_offset": 2048, 00:14:54.337 "data_size": 63488 00:14:54.337 }, 00:14:54.337 { 00:14:54.337 "name": null, 00:14:54.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.337 "is_configured": false, 00:14:54.337 "data_offset": 2048, 00:14:54.337 "data_size": 63488 00:14:54.337 }, 00:14:54.337 { 00:14:54.337 "name": "BaseBdev3", 00:14:54.337 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:54.337 "is_configured": true, 00:14:54.337 "data_offset": 2048, 00:14:54.337 "data_size": 63488 00:14:54.337 }, 00:14:54.337 { 00:14:54.337 "name": "BaseBdev4", 00:14:54.337 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:54.337 "is_configured": true, 00:14:54.337 "data_offset": 2048, 00:14:54.337 "data_size": 63488 00:14:54.337 } 00:14:54.337 ] 00:14:54.337 }' 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.337 [2024-10-09 03:17:37.503280] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:54.337 [2024-10-09 03:17:37.557054] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:54.337 [2024-10-09 03:17:37.557129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:54.337 [2024-10-09 03:17:37.557150] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:54.337 [2024-10-09 03:17:37.557158] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.337 "name": "raid_bdev1", 00:14:54.337 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:54.337 "strip_size_kb": 0, 00:14:54.337 "state": "online", 00:14:54.337 "raid_level": "raid1", 00:14:54.337 "superblock": true, 00:14:54.337 "num_base_bdevs": 4, 00:14:54.337 "num_base_bdevs_discovered": 2, 00:14:54.337 "num_base_bdevs_operational": 2, 00:14:54.337 "base_bdevs_list": [ 00:14:54.337 { 00:14:54.337 "name": null, 00:14:54.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.337 "is_configured": false, 00:14:54.337 "data_offset": 0, 00:14:54.337 "data_size": 63488 00:14:54.337 }, 00:14:54.337 { 00:14:54.337 "name": null, 00:14:54.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.337 "is_configured": false, 00:14:54.337 "data_offset": 2048, 00:14:54.337 "data_size": 63488 00:14:54.337 }, 00:14:54.337 { 00:14:54.337 "name": "BaseBdev3", 00:14:54.337 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:54.337 "is_configured": true, 00:14:54.337 "data_offset": 2048, 00:14:54.337 "data_size": 63488 00:14:54.337 }, 00:14:54.337 { 00:14:54.337 "name": "BaseBdev4", 00:14:54.337 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:54.337 "is_configured": true, 00:14:54.337 "data_offset": 2048, 00:14:54.337 "data_size": 63488 00:14:54.337 } 00:14:54.337 ] 00:14:54.337 }' 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:14:54.337 03:17:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.907 03:17:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:54.907 03:17:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.907 03:17:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.907 [2024-10-09 03:17:38.042483] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:54.907 [2024-10-09 03:17:38.042576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:54.907 [2024-10-09 03:17:38.042615] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:14:54.907 [2024-10-09 03:17:38.042627] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:54.907 [2024-10-09 03:17:38.043311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:54.907 [2024-10-09 03:17:38.043342] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:54.907 [2024-10-09 03:17:38.043467] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:54.907 [2024-10-09 03:17:38.043490] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:14:54.907 [2024-10-09 03:17:38.043511] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:54.907 [2024-10-09 03:17:38.043545] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:54.907 [2024-10-09 03:17:38.059435] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:14:54.907 spare 00:14:54.907 03:17:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.907 03:17:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:54.907 [2024-10-09 03:17:38.061814] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:55.844 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.844 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.844 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.844 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.844 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.844 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.844 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.844 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.844 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:55.844 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.844 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.844 "name": "raid_bdev1", 00:14:55.844 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:55.844 "strip_size_kb": 0, 00:14:55.844 "state": "online", 00:14:55.844 
"raid_level": "raid1", 00:14:55.844 "superblock": true, 00:14:55.844 "num_base_bdevs": 4, 00:14:55.844 "num_base_bdevs_discovered": 3, 00:14:55.844 "num_base_bdevs_operational": 3, 00:14:55.844 "process": { 00:14:55.844 "type": "rebuild", 00:14:55.844 "target": "spare", 00:14:55.844 "progress": { 00:14:55.844 "blocks": 20480, 00:14:55.844 "percent": 32 00:14:55.844 } 00:14:55.844 }, 00:14:55.844 "base_bdevs_list": [ 00:14:55.844 { 00:14:55.844 "name": "spare", 00:14:55.844 "uuid": "5e87bb0f-ce2a-55ab-9299-65a42d9604b6", 00:14:55.844 "is_configured": true, 00:14:55.844 "data_offset": 2048, 00:14:55.844 "data_size": 63488 00:14:55.844 }, 00:14:55.844 { 00:14:55.844 "name": null, 00:14:55.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:55.844 "is_configured": false, 00:14:55.844 "data_offset": 2048, 00:14:55.844 "data_size": 63488 00:14:55.844 }, 00:14:55.844 { 00:14:55.844 "name": "BaseBdev3", 00:14:55.844 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:55.844 "is_configured": true, 00:14:55.844 "data_offset": 2048, 00:14:55.844 "data_size": 63488 00:14:55.844 }, 00:14:55.844 { 00:14:55.844 "name": "BaseBdev4", 00:14:55.844 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:55.844 "is_configured": true, 00:14:55.844 "data_offset": 2048, 00:14:55.844 "data_size": 63488 00:14:55.844 } 00:14:55.844 ] 00:14:55.844 }' 00:14:55.844 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.105 [2024-10-09 03:17:39.217619] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:56.105 [2024-10-09 03:17:39.272321] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:56.105 [2024-10-09 03:17:39.272395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:56.105 [2024-10-09 03:17:39.272414] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:56.105 [2024-10-09 03:17:39.272425] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:56.105 
03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:56.105 "name": "raid_bdev1", 00:14:56.105 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:56.105 "strip_size_kb": 0, 00:14:56.105 "state": "online", 00:14:56.105 "raid_level": "raid1", 00:14:56.105 "superblock": true, 00:14:56.105 "num_base_bdevs": 4, 00:14:56.105 "num_base_bdevs_discovered": 2, 00:14:56.105 "num_base_bdevs_operational": 2, 00:14:56.105 "base_bdevs_list": [ 00:14:56.105 { 00:14:56.105 "name": null, 00:14:56.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.105 "is_configured": false, 00:14:56.105 "data_offset": 0, 00:14:56.105 "data_size": 63488 00:14:56.105 }, 00:14:56.105 { 00:14:56.105 "name": null, 00:14:56.105 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.105 "is_configured": false, 00:14:56.105 "data_offset": 2048, 00:14:56.105 "data_size": 63488 00:14:56.105 }, 00:14:56.105 { 00:14:56.105 "name": "BaseBdev3", 00:14:56.105 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:56.105 "is_configured": true, 00:14:56.105 "data_offset": 2048, 00:14:56.105 "data_size": 63488 00:14:56.105 }, 00:14:56.105 { 00:14:56.105 "name": "BaseBdev4", 00:14:56.105 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:56.105 "is_configured": true, 00:14:56.105 "data_offset": 2048, 00:14:56.105 "data_size": 63488 00:14:56.105 } 00:14:56.105 ] 00:14:56.105 }' 00:14:56.105 03:17:39 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:56.105 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.675 "name": "raid_bdev1", 00:14:56.675 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:56.675 "strip_size_kb": 0, 00:14:56.675 "state": "online", 00:14:56.675 "raid_level": "raid1", 00:14:56.675 "superblock": true, 00:14:56.675 "num_base_bdevs": 4, 00:14:56.675 "num_base_bdevs_discovered": 2, 00:14:56.675 "num_base_bdevs_operational": 2, 00:14:56.675 "base_bdevs_list": [ 00:14:56.675 { 00:14:56.675 "name": null, 00:14:56.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.675 "is_configured": false, 00:14:56.675 "data_offset": 0, 00:14:56.675 "data_size": 63488 00:14:56.675 }, 00:14:56.675 
{ 00:14:56.675 "name": null, 00:14:56.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:56.675 "is_configured": false, 00:14:56.675 "data_offset": 2048, 00:14:56.675 "data_size": 63488 00:14:56.675 }, 00:14:56.675 { 00:14:56.675 "name": "BaseBdev3", 00:14:56.675 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:56.675 "is_configured": true, 00:14:56.675 "data_offset": 2048, 00:14:56.675 "data_size": 63488 00:14:56.675 }, 00:14:56.675 { 00:14:56.675 "name": "BaseBdev4", 00:14:56.675 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:56.675 "is_configured": true, 00:14:56.675 "data_offset": 2048, 00:14:56.675 "data_size": 63488 00:14:56.675 } 00:14:56.675 ] 00:14:56.675 }' 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:56.675 [2024-10-09 03:17:39.851517] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:56.675 [2024-10-09 03:17:39.851601] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:56.675 [2024-10-09 03:17:39.851625] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:14:56.675 [2024-10-09 03:17:39.851637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:56.675 [2024-10-09 03:17:39.852205] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:56.675 [2024-10-09 03:17:39.852236] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:56.675 [2024-10-09 03:17:39.852326] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:56.675 [2024-10-09 03:17:39.852352] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:56.675 [2024-10-09 03:17:39.852361] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:56.675 [2024-10-09 03:17:39.852383] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:56.675 BaseBdev1 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.675 03:17:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:57.615 03:17:40 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:57.615 "name": "raid_bdev1", 00:14:57.615 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:57.615 "strip_size_kb": 0, 00:14:57.615 "state": "online", 00:14:57.615 "raid_level": "raid1", 00:14:57.615 "superblock": true, 00:14:57.615 "num_base_bdevs": 4, 00:14:57.615 "num_base_bdevs_discovered": 2, 00:14:57.615 "num_base_bdevs_operational": 2, 00:14:57.615 "base_bdevs_list": [ 00:14:57.615 { 00:14:57.615 "name": null, 00:14:57.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.615 "is_configured": false, 00:14:57.615 "data_offset": 0, 00:14:57.615 "data_size": 63488 00:14:57.615 }, 00:14:57.615 { 00:14:57.615 "name": null, 00:14:57.615 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:57.615 
"is_configured": false, 00:14:57.615 "data_offset": 2048, 00:14:57.615 "data_size": 63488 00:14:57.615 }, 00:14:57.615 { 00:14:57.615 "name": "BaseBdev3", 00:14:57.615 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:57.615 "is_configured": true, 00:14:57.615 "data_offset": 2048, 00:14:57.615 "data_size": 63488 00:14:57.615 }, 00:14:57.615 { 00:14:57.615 "name": "BaseBdev4", 00:14:57.615 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:57.615 "is_configured": true, 00:14:57.615 "data_offset": 2048, 00:14:57.615 "data_size": 63488 00:14:57.615 } 00:14:57.615 ] 00:14:57.615 }' 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:57.615 03:17:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.185 03:17:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:58.185 03:17:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.185 03:17:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:58.185 03:17:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:58.185 03:17:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.185 03:17:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.185 03:17:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.185 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.185 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.185 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:58.185 03:17:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:14:58.185 "name": "raid_bdev1", 00:14:58.185 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:58.186 "strip_size_kb": 0, 00:14:58.186 "state": "online", 00:14:58.186 "raid_level": "raid1", 00:14:58.186 "superblock": true, 00:14:58.186 "num_base_bdevs": 4, 00:14:58.186 "num_base_bdevs_discovered": 2, 00:14:58.186 "num_base_bdevs_operational": 2, 00:14:58.186 "base_bdevs_list": [ 00:14:58.186 { 00:14:58.186 "name": null, 00:14:58.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.186 "is_configured": false, 00:14:58.186 "data_offset": 0, 00:14:58.186 "data_size": 63488 00:14:58.186 }, 00:14:58.186 { 00:14:58.186 "name": null, 00:14:58.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:58.186 "is_configured": false, 00:14:58.186 "data_offset": 2048, 00:14:58.186 "data_size": 63488 00:14:58.186 }, 00:14:58.186 { 00:14:58.186 "name": "BaseBdev3", 00:14:58.186 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:58.186 "is_configured": true, 00:14:58.186 "data_offset": 2048, 00:14:58.186 "data_size": 63488 00:14:58.186 }, 00:14:58.186 { 00:14:58.186 "name": "BaseBdev4", 00:14:58.186 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:58.186 "is_configured": true, 00:14:58.186 "data_offset": 2048, 00:14:58.186 "data_size": 63488 00:14:58.186 } 00:14:58.186 ] 00:14:58.186 }' 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@650 -- # local 
es=0 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:58.186 [2024-10-09 03:17:41.424963] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:58.186 [2024-10-09 03:17:41.425236] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:14:58.186 [2024-10-09 03:17:41.425256] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:58.186 request: 00:14:58.186 { 00:14:58.186 "base_bdev": "BaseBdev1", 00:14:58.186 "raid_bdev": "raid_bdev1", 00:14:58.186 "method": "bdev_raid_add_base_bdev", 00:14:58.186 "req_id": 1 00:14:58.186 } 00:14:58.186 Got JSON-RPC error response 00:14:58.186 response: 00:14:58.186 { 00:14:58.186 "code": -22, 00:14:58.186 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:58.186 } 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@653 -- # es=1 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:58.186 03:17:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.567 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.567 "name": "raid_bdev1", 00:14:59.567 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:59.567 "strip_size_kb": 0, 00:14:59.567 "state": "online", 00:14:59.567 "raid_level": "raid1", 00:14:59.567 "superblock": true, 00:14:59.567 "num_base_bdevs": 4, 00:14:59.567 "num_base_bdevs_discovered": 2, 00:14:59.567 "num_base_bdevs_operational": 2, 00:14:59.567 "base_bdevs_list": [ 00:14:59.567 { 00:14:59.567 "name": null, 00:14:59.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.567 "is_configured": false, 00:14:59.567 "data_offset": 0, 00:14:59.567 "data_size": 63488 00:14:59.567 }, 00:14:59.567 { 00:14:59.567 "name": null, 00:14:59.567 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.567 "is_configured": false, 00:14:59.567 "data_offset": 2048, 00:14:59.567 "data_size": 63488 00:14:59.567 }, 00:14:59.567 { 00:14:59.567 "name": "BaseBdev3", 00:14:59.567 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:59.567 "is_configured": true, 00:14:59.567 "data_offset": 2048, 00:14:59.567 "data_size": 63488 00:14:59.567 }, 00:14:59.567 { 00:14:59.567 "name": "BaseBdev4", 00:14:59.567 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:59.567 "is_configured": true, 00:14:59.567 "data_offset": 2048, 00:14:59.567 "data_size": 63488 00:14:59.567 } 00:14:59.567 ] 00:14:59.568 }' 00:14:59.568 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.568 03:17:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.828 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:59.828 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.828 03:17:42 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:59.828 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:59.828 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.828 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.828 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.828 03:17:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.828 03:17:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:59.828 03:17:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.828 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.828 "name": "raid_bdev1", 00:14:59.828 "uuid": "c50bd912-57f3-464c-9f1e-af83d885bce9", 00:14:59.828 "strip_size_kb": 0, 00:14:59.828 "state": "online", 00:14:59.828 "raid_level": "raid1", 00:14:59.828 "superblock": true, 00:14:59.828 "num_base_bdevs": 4, 00:14:59.828 "num_base_bdevs_discovered": 2, 00:14:59.828 "num_base_bdevs_operational": 2, 00:14:59.828 "base_bdevs_list": [ 00:14:59.828 { 00:14:59.828 "name": null, 00:14:59.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.828 "is_configured": false, 00:14:59.828 "data_offset": 0, 00:14:59.828 "data_size": 63488 00:14:59.828 }, 00:14:59.828 { 00:14:59.828 "name": null, 00:14:59.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:59.828 "is_configured": false, 00:14:59.828 "data_offset": 2048, 00:14:59.828 "data_size": 63488 00:14:59.828 }, 00:14:59.828 { 00:14:59.828 "name": "BaseBdev3", 00:14:59.828 "uuid": "11ecc296-1314-546f-a769-0a7620bd0d07", 00:14:59.828 "is_configured": true, 00:14:59.828 "data_offset": 2048, 00:14:59.828 "data_size": 63488 00:14:59.828 }, 
00:14:59.828 { 00:14:59.828 "name": "BaseBdev4", 00:14:59.828 "uuid": "367a3b84-f3c6-5e7d-8026-4d57f6687e4e", 00:14:59.828 "is_configured": true, 00:14:59.828 "data_offset": 2048, 00:14:59.828 "data_size": 63488 00:14:59.828 } 00:14:59.828 ] 00:14:59.828 }' 00:14:59.828 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.828 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.828 03:17:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.828 03:17:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.828 03:17:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78238 00:14:59.828 03:17:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 78238 ']' 00:14:59.828 03:17:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 78238 00:14:59.828 03:17:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:14:59.828 03:17:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:59.828 03:17:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78238 00:14:59.828 03:17:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:59.828 03:17:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:59.828 killing process with pid 78238 00:14:59.828 03:17:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78238' 00:14:59.828 03:17:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 78238 00:14:59.828 Received shutdown signal, test time was about 60.000000 seconds 00:14:59.828 00:14:59.828 Latency(us) 00:14:59.828 
[2024-10-09T03:17:43.131Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:59.828 [2024-10-09T03:17:43.131Z] =================================================================================================================== 00:14:59.828 [2024-10-09T03:17:43.131Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:59.828 [2024-10-09 03:17:43.044596] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:59.828 03:17:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 78238 00:14:59.828 [2024-10-09 03:17:43.044748] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:59.828 [2024-10-09 03:17:43.044829] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:59.828 [2024-10-09 03:17:43.044851] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:00.398 [2024-10-09 03:17:43.518054] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:15:01.780 00:15:01.780 real 0m25.741s 00:15:01.780 user 0m30.197s 00:15:01.780 sys 0m4.006s 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:01.780 ************************************ 00:15:01.780 END TEST raid_rebuild_test_sb 00:15:01.780 ************************************ 00:15:01.780 03:17:44 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:15:01.780 03:17:44 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:01.780 03:17:44 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:01.780 03:17:44 bdev_raid -- common/autotest_common.sh@10 -- # 
set +x 00:15:01.780 ************************************ 00:15:01.780 START TEST raid_rebuild_test_io 00:15:01.780 ************************************ 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 false true true 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.780 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # 
echo BaseBdev4 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78997 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78997 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@831 -- # '[' -z 78997 ']' 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:15:01.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:01.781 03:17:44 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:01.781 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:01.781 Zero copy mechanism will not be used. 00:15:01.781 [2024-10-09 03:17:44.922716] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:15:01.781 [2024-10-09 03:17:44.922844] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78997 ] 00:15:01.781 [2024-10-09 03:17:45.082671] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.041 [2024-10-09 03:17:45.281725] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:02.301 [2024-10-09 03:17:45.538328] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.301 [2024-10-09 03:17:45.538381] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # return 0 00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 
00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.561 BaseBdev1_malloc 00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.561 [2024-10-09 03:17:45.823342] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:02.561 [2024-10-09 03:17:45.823424] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.561 [2024-10-09 03:17:45.823450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:02.561 [2024-10-09 03:17:45.823468] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.561 [2024-10-09 03:17:45.826026] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.561 [2024-10-09 03:17:45.826067] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:02.561 BaseBdev1 00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.561 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # 
set +x 00:15:02.819 BaseBdev2_malloc 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.819 [2024-10-09 03:17:45.907490] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:02.819 [2024-10-09 03:17:45.907564] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.819 [2024-10-09 03:17:45.907586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:02.819 [2024-10-09 03:17:45.907601] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.819 [2024-10-09 03:17:45.910164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.819 [2024-10-09 03:17:45.910205] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:02.819 BaseBdev2 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.819 BaseBdev3_malloc 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.819 [2024-10-09 03:17:45.971276] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:02.819 [2024-10-09 03:17:45.971338] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.819 [2024-10-09 03:17:45.971367] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:02.819 [2024-10-09 03:17:45.971380] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.819 [2024-10-09 03:17:45.973889] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.819 [2024-10-09 03:17:45.973927] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:02.819 BaseBdev3 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.819 03:17:45 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.819 BaseBdev4_malloc 00:15:02.819 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.819 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:02.819 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 
00:15:02.819 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.819 [2024-10-09 03:17:46.036943] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:02.820 [2024-10-09 03:17:46.037014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.820 [2024-10-09 03:17:46.037055] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:02.820 [2024-10-09 03:17:46.037069] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.820 [2024-10-09 03:17:46.039575] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.820 [2024-10-09 03:17:46.039615] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:02.820 BaseBdev4 00:15:02.820 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.820 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:02.820 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.820 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.820 spare_malloc 00:15:02.820 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.820 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:02.820 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.820 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.820 spare_delay 00:15:02.820 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.820 03:17:46 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:02.820 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.820 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:02.820 [2024-10-09 03:17:46.118148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:02.820 [2024-10-09 03:17:46.118242] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:02.820 [2024-10-09 03:17:46.118269] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:02.820 [2024-10-09 03:17:46.118283] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:02.820 [2024-10-09 03:17:46.121113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:02.820 [2024-10-09 03:17:46.121156] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:03.079 spare 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.079 [2024-10-09 03:17:46.130211] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:03.079 [2024-10-09 03:17:46.132560] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:03.079 [2024-10-09 03:17:46.132648] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:03.079 [2024-10-09 03:17:46.132712] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:15:03.079 [2024-10-09 03:17:46.132819] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:03.079 [2024-10-09 03:17:46.132856] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:03.079 [2024-10-09 03:17:46.133172] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:03.079 [2024-10-09 03:17:46.133406] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:03.079 [2024-10-09 03:17:46.133426] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:03.079 [2024-10-09 03:17:46.133628] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.079 "name": "raid_bdev1", 00:15:03.079 "uuid": "4ddf9b8c-531b-4b62-99cf-8100434b0b4a", 00:15:03.079 "strip_size_kb": 0, 00:15:03.079 "state": "online", 00:15:03.079 "raid_level": "raid1", 00:15:03.079 "superblock": false, 00:15:03.079 "num_base_bdevs": 4, 00:15:03.079 "num_base_bdevs_discovered": 4, 00:15:03.079 "num_base_bdevs_operational": 4, 00:15:03.079 "base_bdevs_list": [ 00:15:03.079 { 00:15:03.079 "name": "BaseBdev1", 00:15:03.079 "uuid": "1692d84d-0db8-5e3c-821f-f46aeb24536b", 00:15:03.079 "is_configured": true, 00:15:03.079 "data_offset": 0, 00:15:03.079 "data_size": 65536 00:15:03.079 }, 00:15:03.079 { 00:15:03.079 "name": "BaseBdev2", 00:15:03.079 "uuid": "01013e68-c6a0-5d3b-ae6e-11e20e50db17", 00:15:03.079 "is_configured": true, 00:15:03.079 "data_offset": 0, 00:15:03.079 "data_size": 65536 00:15:03.079 }, 00:15:03.079 { 00:15:03.079 "name": "BaseBdev3", 00:15:03.079 "uuid": "6171797f-8ece-5ecb-91e5-331e4201aeef", 00:15:03.079 "is_configured": true, 00:15:03.079 "data_offset": 0, 00:15:03.079 "data_size": 65536 00:15:03.079 }, 00:15:03.079 { 00:15:03.079 "name": "BaseBdev4", 00:15:03.079 "uuid": "8646de3f-ae52-51c7-b4d2-ffeb794ecfaf", 00:15:03.079 "is_configured": true, 00:15:03.079 "data_offset": 0, 00:15:03.079 "data_size": 65536 00:15:03.079 } 00:15:03.079 ] 00:15:03.079 }' 00:15:03.079 
03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.079 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.339 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:03.339 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.339 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.339 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:03.339 [2024-10-09 03:17:46.593863] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:03.339 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.339 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:03.339 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.339 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.339 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.339 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:03.599 03:17:46 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.599 [2024-10-09 03:17:46.709209] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:03.599 "name": "raid_bdev1", 00:15:03.599 "uuid": "4ddf9b8c-531b-4b62-99cf-8100434b0b4a", 00:15:03.599 "strip_size_kb": 0, 00:15:03.599 "state": "online", 00:15:03.599 "raid_level": "raid1", 00:15:03.599 "superblock": false, 00:15:03.599 "num_base_bdevs": 4, 00:15:03.599 "num_base_bdevs_discovered": 3, 00:15:03.599 "num_base_bdevs_operational": 3, 00:15:03.599 "base_bdevs_list": [ 00:15:03.599 { 00:15:03.599 "name": null, 00:15:03.599 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:03.599 "is_configured": false, 00:15:03.599 "data_offset": 0, 00:15:03.599 "data_size": 65536 00:15:03.599 }, 00:15:03.599 { 00:15:03.599 "name": "BaseBdev2", 00:15:03.599 "uuid": "01013e68-c6a0-5d3b-ae6e-11e20e50db17", 00:15:03.599 "is_configured": true, 00:15:03.599 "data_offset": 0, 00:15:03.599 "data_size": 65536 00:15:03.599 }, 00:15:03.599 { 00:15:03.599 "name": "BaseBdev3", 00:15:03.599 "uuid": "6171797f-8ece-5ecb-91e5-331e4201aeef", 00:15:03.599 "is_configured": true, 00:15:03.599 "data_offset": 0, 00:15:03.599 "data_size": 65536 00:15:03.599 }, 00:15:03.599 { 00:15:03.599 "name": "BaseBdev4", 00:15:03.599 "uuid": "8646de3f-ae52-51c7-b4d2-ffeb794ecfaf", 00:15:03.599 "is_configured": true, 00:15:03.599 "data_offset": 0, 00:15:03.599 "data_size": 65536 00:15:03.599 } 00:15:03.599 ] 00:15:03.599 }' 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:03.599 03:17:46 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.599 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:03.599 Zero copy mechanism will not be used. 00:15:03.599 Running I/O for 60 seconds... 
00:15:03.599 [2024-10-09 03:17:46.806820] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:04.169 03:17:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:04.169 03:17:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.169 03:17:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.169 [2024-10-09 03:17:47.175149] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:04.169 03:17:47 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.169 03:17:47 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:04.169 [2024-10-09 03:17:47.217202] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:04.169 [2024-10-09 03:17:47.219749] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:04.428 [2024-10-09 03:17:47.491669] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:04.428 [2024-10-09 03:17:47.492217] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:04.688 184.00 IOPS, 552.00 MiB/s [2024-10-09T03:17:47.991Z] [2024-10-09 03:17:47.859478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:04.947 [2024-10-09 03:17:48.076901] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:04.947 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:04.947 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:04.947 
03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:04.947 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:04.947 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:04.947 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.947 03:17:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.947 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.947 03:17:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.947 03:17:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.207 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.207 "name": "raid_bdev1", 00:15:05.207 "uuid": "4ddf9b8c-531b-4b62-99cf-8100434b0b4a", 00:15:05.207 "strip_size_kb": 0, 00:15:05.207 "state": "online", 00:15:05.207 "raid_level": "raid1", 00:15:05.207 "superblock": false, 00:15:05.207 "num_base_bdevs": 4, 00:15:05.207 "num_base_bdevs_discovered": 4, 00:15:05.207 "num_base_bdevs_operational": 4, 00:15:05.207 "process": { 00:15:05.207 "type": "rebuild", 00:15:05.207 "target": "spare", 00:15:05.207 "progress": { 00:15:05.207 "blocks": 12288, 00:15:05.207 "percent": 18 00:15:05.207 } 00:15:05.207 }, 00:15:05.207 "base_bdevs_list": [ 00:15:05.207 { 00:15:05.207 "name": "spare", 00:15:05.207 "uuid": "997a8071-5e88-5acd-b5a9-a4b582fa68c5", 00:15:05.207 "is_configured": true, 00:15:05.207 "data_offset": 0, 00:15:05.207 "data_size": 65536 00:15:05.207 }, 00:15:05.207 { 00:15:05.207 "name": "BaseBdev2", 00:15:05.207 "uuid": "01013e68-c6a0-5d3b-ae6e-11e20e50db17", 00:15:05.207 "is_configured": true, 00:15:05.207 "data_offset": 0, 00:15:05.207 "data_size": 65536 00:15:05.207 }, 
00:15:05.207 { 00:15:05.207 "name": "BaseBdev3", 00:15:05.207 "uuid": "6171797f-8ece-5ecb-91e5-331e4201aeef", 00:15:05.207 "is_configured": true, 00:15:05.207 "data_offset": 0, 00:15:05.207 "data_size": 65536 00:15:05.207 }, 00:15:05.207 { 00:15:05.207 "name": "BaseBdev4", 00:15:05.207 "uuid": "8646de3f-ae52-51c7-b4d2-ffeb794ecfaf", 00:15:05.207 "is_configured": true, 00:15:05.207 "data_offset": 0, 00:15:05.207 "data_size": 65536 00:15:05.207 } 00:15:05.207 ] 00:15:05.207 }' 00:15:05.207 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.207 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:05.207 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.207 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:05.207 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:05.207 03:17:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.207 03:17:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.207 [2024-10-09 03:17:48.364749] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.207 [2024-10-09 03:17:48.473180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:05.207 [2024-10-09 03:17:48.474497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:05.467 [2024-10-09 03:17:48.585225] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:05.467 [2024-10-09 03:17:48.600281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:05.467 [2024-10-09 
03:17:48.600350] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:05.467 [2024-10-09 03:17:48.600372] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:05.467 [2024-10-09 03:17:48.656027] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.467 "name": "raid_bdev1", 00:15:05.467 "uuid": "4ddf9b8c-531b-4b62-99cf-8100434b0b4a", 00:15:05.467 "strip_size_kb": 0, 00:15:05.467 "state": "online", 00:15:05.467 "raid_level": "raid1", 00:15:05.467 "superblock": false, 00:15:05.467 "num_base_bdevs": 4, 00:15:05.467 "num_base_bdevs_discovered": 3, 00:15:05.467 "num_base_bdevs_operational": 3, 00:15:05.467 "base_bdevs_list": [ 00:15:05.467 { 00:15:05.467 "name": null, 00:15:05.467 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.467 "is_configured": false, 00:15:05.467 "data_offset": 0, 00:15:05.467 "data_size": 65536 00:15:05.467 }, 00:15:05.467 { 00:15:05.467 "name": "BaseBdev2", 00:15:05.467 "uuid": "01013e68-c6a0-5d3b-ae6e-11e20e50db17", 00:15:05.467 "is_configured": true, 00:15:05.467 "data_offset": 0, 00:15:05.467 "data_size": 65536 00:15:05.467 }, 00:15:05.467 { 00:15:05.467 "name": "BaseBdev3", 00:15:05.467 "uuid": "6171797f-8ece-5ecb-91e5-331e4201aeef", 00:15:05.467 "is_configured": true, 00:15:05.467 "data_offset": 0, 00:15:05.467 "data_size": 65536 00:15:05.467 }, 00:15:05.467 { 00:15:05.467 "name": "BaseBdev4", 00:15:05.467 "uuid": "8646de3f-ae52-51c7-b4d2-ffeb794ecfaf", 00:15:05.467 "is_configured": true, 00:15:05.467 "data_offset": 0, 00:15:05.467 "data_size": 65536 00:15:05.467 } 00:15:05.467 ] 00:15:05.467 }' 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.467 03:17:48 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.986 143.00 IOPS, 429.00 MiB/s [2024-10-09T03:17:49.289Z] 03:17:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:05.986 03:17:49 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:05.986 03:17:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:05.986 03:17:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:05.986 03:17:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:05.986 03:17:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.986 03:17:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.986 03:17:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.986 03:17:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.986 03:17:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.986 03:17:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:05.986 "name": "raid_bdev1", 00:15:05.986 "uuid": "4ddf9b8c-531b-4b62-99cf-8100434b0b4a", 00:15:05.986 "strip_size_kb": 0, 00:15:05.986 "state": "online", 00:15:05.986 "raid_level": "raid1", 00:15:05.986 "superblock": false, 00:15:05.986 "num_base_bdevs": 4, 00:15:05.986 "num_base_bdevs_discovered": 3, 00:15:05.986 "num_base_bdevs_operational": 3, 00:15:05.986 "base_bdevs_list": [ 00:15:05.986 { 00:15:05.986 "name": null, 00:15:05.986 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.986 "is_configured": false, 00:15:05.987 "data_offset": 0, 00:15:05.987 "data_size": 65536 00:15:05.987 }, 00:15:05.987 { 00:15:05.987 "name": "BaseBdev2", 00:15:05.987 "uuid": "01013e68-c6a0-5d3b-ae6e-11e20e50db17", 00:15:05.987 "is_configured": true, 00:15:05.987 "data_offset": 0, 00:15:05.987 "data_size": 65536 00:15:05.987 }, 00:15:05.987 { 00:15:05.987 "name": "BaseBdev3", 00:15:05.987 "uuid": "6171797f-8ece-5ecb-91e5-331e4201aeef", 00:15:05.987 "is_configured": 
true, 00:15:05.987 "data_offset": 0, 00:15:05.987 "data_size": 65536 00:15:05.987 }, 00:15:05.987 { 00:15:05.987 "name": "BaseBdev4", 00:15:05.987 "uuid": "8646de3f-ae52-51c7-b4d2-ffeb794ecfaf", 00:15:05.987 "is_configured": true, 00:15:05.987 "data_offset": 0, 00:15:05.987 "data_size": 65536 00:15:05.987 } 00:15:05.987 ] 00:15:05.987 }' 00:15:05.987 03:17:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:05.987 03:17:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:05.987 03:17:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:05.987 03:17:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:05.987 03:17:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:05.987 03:17:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.987 03:17:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.987 [2024-10-09 03:17:49.271575] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:06.246 03:17:49 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.246 03:17:49 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:06.246 [2024-10-09 03:17:49.334476] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:06.246 [2024-10-09 03:17:49.336946] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:06.246 [2024-10-09 03:17:49.444859] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:06.246 [2024-10-09 03:17:49.447455] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:15:06.506 [2024-10-09 03:17:49.650468] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:06.506 [2024-10-09 03:17:49.651779] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:06.765 139.33 IOPS, 418.00 MiB/s [2024-10-09T03:17:50.068Z] [2024-10-09 03:17:49.999871] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:06.765 [2024-10-09 03:17:50.002101] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:07.025 [2024-10-09 03:17:50.220115] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:07.025 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.025 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.025 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.025 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.025 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.025 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.285 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.285 03:17:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.285 03:17:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.285 03:17:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.285 
03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.285 "name": "raid_bdev1", 00:15:07.285 "uuid": "4ddf9b8c-531b-4b62-99cf-8100434b0b4a", 00:15:07.285 "strip_size_kb": 0, 00:15:07.285 "state": "online", 00:15:07.285 "raid_level": "raid1", 00:15:07.285 "superblock": false, 00:15:07.285 "num_base_bdevs": 4, 00:15:07.285 "num_base_bdevs_discovered": 4, 00:15:07.285 "num_base_bdevs_operational": 4, 00:15:07.285 "process": { 00:15:07.285 "type": "rebuild", 00:15:07.285 "target": "spare", 00:15:07.285 "progress": { 00:15:07.285 "blocks": 10240, 00:15:07.285 "percent": 15 00:15:07.285 } 00:15:07.285 }, 00:15:07.285 "base_bdevs_list": [ 00:15:07.285 { 00:15:07.285 "name": "spare", 00:15:07.285 "uuid": "997a8071-5e88-5acd-b5a9-a4b582fa68c5", 00:15:07.285 "is_configured": true, 00:15:07.285 "data_offset": 0, 00:15:07.285 "data_size": 65536 00:15:07.285 }, 00:15:07.285 { 00:15:07.285 "name": "BaseBdev2", 00:15:07.285 "uuid": "01013e68-c6a0-5d3b-ae6e-11e20e50db17", 00:15:07.285 "is_configured": true, 00:15:07.285 "data_offset": 0, 00:15:07.285 "data_size": 65536 00:15:07.285 }, 00:15:07.285 { 00:15:07.285 "name": "BaseBdev3", 00:15:07.285 "uuid": "6171797f-8ece-5ecb-91e5-331e4201aeef", 00:15:07.285 "is_configured": true, 00:15:07.285 "data_offset": 0, 00:15:07.285 "data_size": 65536 00:15:07.285 }, 00:15:07.285 { 00:15:07.285 "name": "BaseBdev4", 00:15:07.285 "uuid": "8646de3f-ae52-51c7-b4d2-ffeb794ecfaf", 00:15:07.285 "is_configured": true, 00:15:07.285 "data_offset": 0, 00:15:07.285 "data_size": 65536 00:15:07.285 } 00:15:07.285 ] 00:15:07.285 }' 00:15:07.285 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.285 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.285 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.285 03:17:50 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.285 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:07.285 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:07.285 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:07.285 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:07.285 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:07.285 03:17:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.285 03:17:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.285 [2024-10-09 03:17:50.485364] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:07.285 [2024-10-09 03:17:50.485522] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:07.545 [2024-10-09 03:17:50.596525] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:07.545 [2024-10-09 03:17:50.596586] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:07.545 [2024-10-09 03:17:50.596664] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:07.545 03:17:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.545 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild 
spare 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.546 "name": "raid_bdev1", 00:15:07.546 "uuid": "4ddf9b8c-531b-4b62-99cf-8100434b0b4a", 00:15:07.546 "strip_size_kb": 0, 00:15:07.546 "state": "online", 00:15:07.546 "raid_level": "raid1", 00:15:07.546 "superblock": false, 00:15:07.546 "num_base_bdevs": 4, 00:15:07.546 "num_base_bdevs_discovered": 3, 00:15:07.546 "num_base_bdevs_operational": 3, 00:15:07.546 "process": { 00:15:07.546 "type": "rebuild", 00:15:07.546 "target": "spare", 00:15:07.546 "progress": { 00:15:07.546 "blocks": 14336, 00:15:07.546 "percent": 21 00:15:07.546 } 00:15:07.546 }, 00:15:07.546 "base_bdevs_list": [ 00:15:07.546 { 00:15:07.546 "name": "spare", 00:15:07.546 "uuid": "997a8071-5e88-5acd-b5a9-a4b582fa68c5", 00:15:07.546 "is_configured": true, 00:15:07.546 "data_offset": 0, 00:15:07.546 "data_size": 65536 00:15:07.546 }, 00:15:07.546 { 00:15:07.546 "name": null, 00:15:07.546 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:07.546 "is_configured": false, 00:15:07.546 "data_offset": 0, 00:15:07.546 "data_size": 65536 00:15:07.546 }, 00:15:07.546 { 00:15:07.546 "name": "BaseBdev3", 00:15:07.546 "uuid": "6171797f-8ece-5ecb-91e5-331e4201aeef", 00:15:07.546 "is_configured": true, 00:15:07.546 "data_offset": 0, 00:15:07.546 "data_size": 65536 00:15:07.546 }, 00:15:07.546 { 00:15:07.546 "name": "BaseBdev4", 00:15:07.546 "uuid": "8646de3f-ae52-51c7-b4d2-ffeb794ecfaf", 00:15:07.546 "is_configured": true, 00:15:07.546 "data_offset": 0, 00:15:07.546 "data_size": 65536 00:15:07.546 } 00:15:07.546 ] 00:15:07.546 }' 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=497 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.546 
03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.546 "name": "raid_bdev1", 00:15:07.546 "uuid": "4ddf9b8c-531b-4b62-99cf-8100434b0b4a", 00:15:07.546 "strip_size_kb": 0, 00:15:07.546 "state": "online", 00:15:07.546 "raid_level": "raid1", 00:15:07.546 "superblock": false, 00:15:07.546 "num_base_bdevs": 4, 00:15:07.546 "num_base_bdevs_discovered": 3, 00:15:07.546 "num_base_bdevs_operational": 3, 00:15:07.546 "process": { 00:15:07.546 "type": "rebuild", 00:15:07.546 "target": "spare", 00:15:07.546 "progress": { 00:15:07.546 "blocks": 16384, 00:15:07.546 "percent": 25 00:15:07.546 } 00:15:07.546 }, 00:15:07.546 "base_bdevs_list": [ 00:15:07.546 { 00:15:07.546 "name": "spare", 00:15:07.546 "uuid": "997a8071-5e88-5acd-b5a9-a4b582fa68c5", 00:15:07.546 "is_configured": true, 00:15:07.546 "data_offset": 0, 00:15:07.546 "data_size": 65536 00:15:07.546 }, 00:15:07.546 { 00:15:07.546 "name": null, 00:15:07.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.546 "is_configured": false, 00:15:07.546 "data_offset": 0, 00:15:07.546 "data_size": 65536 00:15:07.546 }, 00:15:07.546 { 00:15:07.546 "name": "BaseBdev3", 00:15:07.546 "uuid": "6171797f-8ece-5ecb-91e5-331e4201aeef", 00:15:07.546 "is_configured": true, 00:15:07.546 "data_offset": 0, 00:15:07.546 "data_size": 65536 00:15:07.546 }, 00:15:07.546 { 00:15:07.546 "name": "BaseBdev4", 00:15:07.546 "uuid": "8646de3f-ae52-51c7-b4d2-ffeb794ecfaf", 00:15:07.546 "is_configured": true, 00:15:07.546 "data_offset": 0, 00:15:07.546 "data_size": 65536 
00:15:07.546 } 00:15:07.546 ] 00:15:07.546 }' 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.546 127.50 IOPS, 382.50 MiB/s [2024-10-09T03:17:50.849Z] 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:07.546 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.874 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:07.874 03:17:50 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:07.874 [2024-10-09 03:17:50.998953] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:08.134 [2024-10-09 03:17:51.433192] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:15:08.393 [2024-10-09 03:17:51.559775] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:08.393 [2024-10-09 03:17:51.560324] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:08.653 115.20 IOPS, 345.60 MiB/s [2024-10-09T03:17:51.956Z] 03:17:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.653 03:17:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.653 03:17:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.653 03:17:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.653 03:17:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.653 03:17:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 
-- # local raid_bdev_info 00:15:08.653 03:17:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.653 03:17:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.653 03:17:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.653 03:17:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.653 03:17:51 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.653 [2024-10-09 03:17:51.927258] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:08.653 [2024-10-09 03:17:51.928186] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:08.653 03:17:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.653 "name": "raid_bdev1", 00:15:08.653 "uuid": "4ddf9b8c-531b-4b62-99cf-8100434b0b4a", 00:15:08.653 "strip_size_kb": 0, 00:15:08.653 "state": "online", 00:15:08.653 "raid_level": "raid1", 00:15:08.653 "superblock": false, 00:15:08.653 "num_base_bdevs": 4, 00:15:08.653 "num_base_bdevs_discovered": 3, 00:15:08.653 "num_base_bdevs_operational": 3, 00:15:08.653 "process": { 00:15:08.653 "type": "rebuild", 00:15:08.653 "target": "spare", 00:15:08.653 "progress": { 00:15:08.653 "blocks": 32768, 00:15:08.653 "percent": 50 00:15:08.653 } 00:15:08.653 }, 00:15:08.653 "base_bdevs_list": [ 00:15:08.653 { 00:15:08.653 "name": "spare", 00:15:08.653 "uuid": "997a8071-5e88-5acd-b5a9-a4b582fa68c5", 00:15:08.653 "is_configured": true, 00:15:08.653 "data_offset": 0, 00:15:08.653 "data_size": 65536 00:15:08.653 }, 00:15:08.653 { 00:15:08.653 "name": null, 00:15:08.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:08.653 "is_configured": false, 00:15:08.653 "data_offset": 0, 
00:15:08.653 "data_size": 65536 00:15:08.653 }, 00:15:08.653 { 00:15:08.653 "name": "BaseBdev3", 00:15:08.653 "uuid": "6171797f-8ece-5ecb-91e5-331e4201aeef", 00:15:08.653 "is_configured": true, 00:15:08.653 "data_offset": 0, 00:15:08.653 "data_size": 65536 00:15:08.653 }, 00:15:08.653 { 00:15:08.653 "name": "BaseBdev4", 00:15:08.653 "uuid": "8646de3f-ae52-51c7-b4d2-ffeb794ecfaf", 00:15:08.653 "is_configured": true, 00:15:08.653 "data_offset": 0, 00:15:08.653 "data_size": 65536 00:15:08.653 } 00:15:08.653 ] 00:15:08.653 }' 00:15:08.653 03:17:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.917 03:17:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.917 03:17:51 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.917 03:17:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.917 03:17:52 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.236 [2024-10-09 03:17:52.383280] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:09.495 [2024-10-09 03:17:52.763862] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:09.755 104.17 IOPS, 312.50 MiB/s [2024-10-09T03:17:53.058Z] [2024-10-09 03:17:52.868199] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:09.755 03:17:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:09.755 03:17:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:09.755 03:17:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:09.755 03:17:53 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:09.755 03:17:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:09.755 03:17:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:09.755 03:17:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:09.755 03:17:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.755 03:17:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:09.755 03:17:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.015 03:17:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.015 03:17:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.015 "name": "raid_bdev1", 00:15:10.015 "uuid": "4ddf9b8c-531b-4b62-99cf-8100434b0b4a", 00:15:10.015 "strip_size_kb": 0, 00:15:10.015 "state": "online", 00:15:10.015 "raid_level": "raid1", 00:15:10.015 "superblock": false, 00:15:10.015 "num_base_bdevs": 4, 00:15:10.015 "num_base_bdevs_discovered": 3, 00:15:10.015 "num_base_bdevs_operational": 3, 00:15:10.015 "process": { 00:15:10.015 "type": "rebuild", 00:15:10.015 "target": "spare", 00:15:10.015 "progress": { 00:15:10.015 "blocks": 49152, 00:15:10.015 "percent": 75 00:15:10.015 } 00:15:10.015 }, 00:15:10.015 "base_bdevs_list": [ 00:15:10.015 { 00:15:10.015 "name": "spare", 00:15:10.015 "uuid": "997a8071-5e88-5acd-b5a9-a4b582fa68c5", 00:15:10.015 "is_configured": true, 00:15:10.015 "data_offset": 0, 00:15:10.015 "data_size": 65536 00:15:10.015 }, 00:15:10.015 { 00:15:10.015 "name": null, 00:15:10.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:10.015 "is_configured": false, 00:15:10.015 "data_offset": 0, 00:15:10.015 "data_size": 65536 00:15:10.015 }, 00:15:10.015 { 
00:15:10.015 "name": "BaseBdev3", 00:15:10.015 "uuid": "6171797f-8ece-5ecb-91e5-331e4201aeef", 00:15:10.015 "is_configured": true, 00:15:10.015 "data_offset": 0, 00:15:10.015 "data_size": 65536 00:15:10.015 }, 00:15:10.015 { 00:15:10.015 "name": "BaseBdev4", 00:15:10.015 "uuid": "8646de3f-ae52-51c7-b4d2-ffeb794ecfaf", 00:15:10.015 "is_configured": true, 00:15:10.015 "data_offset": 0, 00:15:10.015 "data_size": 65536 00:15:10.015 } 00:15:10.015 ] 00:15:10.015 }' 00:15:10.015 03:17:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.015 03:17:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.015 03:17:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.015 03:17:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.015 03:17:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.584 94.43 IOPS, 283.29 MiB/s [2024-10-09T03:17:53.887Z] [2024-10-09 03:17:53.861434] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:10.843 [2024-10-09 03:17:53.961277] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:10.843 [2024-10-09 03:17:53.966334] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.103 "name": "raid_bdev1", 00:15:11.103 "uuid": "4ddf9b8c-531b-4b62-99cf-8100434b0b4a", 00:15:11.103 "strip_size_kb": 0, 00:15:11.103 "state": "online", 00:15:11.103 "raid_level": "raid1", 00:15:11.103 "superblock": false, 00:15:11.103 "num_base_bdevs": 4, 00:15:11.103 "num_base_bdevs_discovered": 3, 00:15:11.103 "num_base_bdevs_operational": 3, 00:15:11.103 "base_bdevs_list": [ 00:15:11.103 { 00:15:11.103 "name": "spare", 00:15:11.103 "uuid": "997a8071-5e88-5acd-b5a9-a4b582fa68c5", 00:15:11.103 "is_configured": true, 00:15:11.103 "data_offset": 0, 00:15:11.103 "data_size": 65536 00:15:11.103 }, 00:15:11.103 { 00:15:11.103 "name": null, 00:15:11.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.103 "is_configured": false, 00:15:11.103 "data_offset": 0, 00:15:11.103 "data_size": 65536 00:15:11.103 }, 00:15:11.103 { 00:15:11.103 "name": "BaseBdev3", 00:15:11.103 "uuid": "6171797f-8ece-5ecb-91e5-331e4201aeef", 00:15:11.103 "is_configured": true, 00:15:11.103 "data_offset": 0, 00:15:11.103 "data_size": 65536 00:15:11.103 }, 00:15:11.103 { 00:15:11.103 "name": "BaseBdev4", 00:15:11.103 "uuid": "8646de3f-ae52-51c7-b4d2-ffeb794ecfaf", 00:15:11.103 "is_configured": true, 00:15:11.103 "data_offset": 0, 
00:15:11.103 "data_size": 65536 00:15:11.103 } 00:15:11.103 ] 00:15:11.103 }' 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.103 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.104 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.104 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.104 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.104 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.104 "name": "raid_bdev1", 00:15:11.104 "uuid": "4ddf9b8c-531b-4b62-99cf-8100434b0b4a", 00:15:11.104 "strip_size_kb": 0, 00:15:11.104 "state": "online", 00:15:11.104 
"raid_level": "raid1", 00:15:11.104 "superblock": false, 00:15:11.104 "num_base_bdevs": 4, 00:15:11.104 "num_base_bdevs_discovered": 3, 00:15:11.104 "num_base_bdevs_operational": 3, 00:15:11.104 "base_bdevs_list": [ 00:15:11.104 { 00:15:11.104 "name": "spare", 00:15:11.104 "uuid": "997a8071-5e88-5acd-b5a9-a4b582fa68c5", 00:15:11.104 "is_configured": true, 00:15:11.104 "data_offset": 0, 00:15:11.104 "data_size": 65536 00:15:11.104 }, 00:15:11.104 { 00:15:11.104 "name": null, 00:15:11.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.104 "is_configured": false, 00:15:11.104 "data_offset": 0, 00:15:11.104 "data_size": 65536 00:15:11.104 }, 00:15:11.104 { 00:15:11.104 "name": "BaseBdev3", 00:15:11.104 "uuid": "6171797f-8ece-5ecb-91e5-331e4201aeef", 00:15:11.104 "is_configured": true, 00:15:11.104 "data_offset": 0, 00:15:11.104 "data_size": 65536 00:15:11.104 }, 00:15:11.104 { 00:15:11.104 "name": "BaseBdev4", 00:15:11.104 "uuid": "8646de3f-ae52-51c7-b4d2-ffeb794ecfaf", 00:15:11.104 "is_configured": true, 00:15:11.104 "data_offset": 0, 00:15:11.104 "data_size": 65536 00:15:11.104 } 00:15:11.104 ] 00:15:11.104 }' 00:15:11.104 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.104 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:11.104 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:11.364 "name": "raid_bdev1", 00:15:11.364 "uuid": "4ddf9b8c-531b-4b62-99cf-8100434b0b4a", 00:15:11.364 "strip_size_kb": 0, 00:15:11.364 "state": "online", 00:15:11.364 "raid_level": "raid1", 00:15:11.364 "superblock": false, 00:15:11.364 "num_base_bdevs": 4, 00:15:11.364 "num_base_bdevs_discovered": 3, 00:15:11.364 "num_base_bdevs_operational": 3, 00:15:11.364 "base_bdevs_list": [ 00:15:11.364 { 00:15:11.364 "name": "spare", 00:15:11.364 "uuid": "997a8071-5e88-5acd-b5a9-a4b582fa68c5", 00:15:11.364 "is_configured": true, 00:15:11.364 "data_offset": 0, 00:15:11.364 "data_size": 65536 00:15:11.364 }, 00:15:11.364 { 00:15:11.364 "name": null, 
00:15:11.364 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:11.364 "is_configured": false, 00:15:11.364 "data_offset": 0, 00:15:11.364 "data_size": 65536 00:15:11.364 }, 00:15:11.364 { 00:15:11.364 "name": "BaseBdev3", 00:15:11.364 "uuid": "6171797f-8ece-5ecb-91e5-331e4201aeef", 00:15:11.364 "is_configured": true, 00:15:11.364 "data_offset": 0, 00:15:11.364 "data_size": 65536 00:15:11.364 }, 00:15:11.364 { 00:15:11.364 "name": "BaseBdev4", 00:15:11.364 "uuid": "8646de3f-ae52-51c7-b4d2-ffeb794ecfaf", 00:15:11.364 "is_configured": true, 00:15:11.364 "data_offset": 0, 00:15:11.364 "data_size": 65536 00:15:11.364 } 00:15:11.364 ] 00:15:11.364 }' 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:11.364 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.624 86.38 IOPS, 259.12 MiB/s [2024-10-09T03:17:54.927Z] 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:11.624 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.624 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.624 [2024-10-09 03:17:54.847027] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:11.624 [2024-10-09 03:17:54.847089] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:11.884 00:15:11.884 Latency(us) 00:15:11.884 [2024-10-09T03:17:55.187Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:11.884 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:11.884 raid_bdev1 : 8.15 85.49 256.47 0.00 0.00 15790.98 313.01 116304.94 00:15:11.884 [2024-10-09T03:17:55.187Z] =================================================================================================================== 00:15:11.884 
[2024-10-09T03:17:55.187Z] Total : 85.49 256.47 0.00 0.00 15790.98 313.01 116304.94 00:15:11.884 [2024-10-09 03:17:54.973362] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:11.884 [2024-10-09 03:17:54.973434] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:11.884 [2024-10-09 03:17:54.973567] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:11.884 [2024-10-09 03:17:54.973583] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:11.884 { 00:15:11.884 "results": [ 00:15:11.884 { 00:15:11.884 "job": "raid_bdev1", 00:15:11.884 "core_mask": "0x1", 00:15:11.884 "workload": "randrw", 00:15:11.884 "percentage": 50, 00:15:11.884 "status": "finished", 00:15:11.884 "queue_depth": 2, 00:15:11.884 "io_size": 3145728, 00:15:11.884 "runtime": 8.153132, 00:15:11.884 "iops": 85.48861958815337, 00:15:11.884 "mibps": 256.4658587644601, 00:15:11.884 "io_failed": 0, 00:15:11.884 "io_timeout": 0, 00:15:11.884 "avg_latency_us": 15790.97522131656, 00:15:11.884 "min_latency_us": 313.0131004366812, 00:15:11.884 "max_latency_us": 116304.93624454149 00:15:11.884 } 00:15:11.884 ], 00:15:11.884 "core_count": 1 00:15:11.884 } 00:15:11.884 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.884 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.884 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.884 03:17:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:11.884 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.884 03:17:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.884 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 
-- # [[ 0 == 0 ]] 00:15:11.884 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:11.884 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:11.884 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:11.884 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:11.884 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:11.884 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:11.884 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:11.884 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:11.884 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:11.884 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:11.884 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:11.884 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:12.144 /dev/nbd0 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:12.144 
03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.144 1+0 records in 00:15:12.144 1+0 records out 00:15:12.144 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000391711 s, 10.5 MB/s 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:12.144 03:17:55 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.144 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:12.404 /dev/nbd1 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:12.404 
03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.404 1+0 records in 00:15:12.404 1+0 records out 00:15:12.404 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000364939 s, 11.2 MB/s 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.404 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:12.664 03:17:55 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.664 03:17:55 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:12.924 /dev/nbd1 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@869 -- # local i 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # break 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:12.924 1+0 records in 00:15:12.924 1+0 records out 00:15:12.924 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000353931 s, 11.6 MB/s 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@886 -- # size=4096 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # return 0 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:12.924 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:13.184 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:13.184 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.184 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:13.184 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.184 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:13.184 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.184 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:13.443 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:13.444 03:17:56 
bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78997 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@950 -- # '[' -z 78997 ']' 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # kill -0 78997 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # uname 00:15:13.444 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:13.704 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 78997 00:15:13.704 killing process with pid 78997 00:15:13.704 Received shutdown signal, test time was about 9.988764 seconds 00:15:13.704 00:15:13.704 Latency(us) 00:15:13.704 [2024-10-09T03:17:57.007Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.704 [2024-10-09T03:17:57.007Z] =================================================================================================================== 00:15:13.704 [2024-10-09T03:17:57.007Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:13.704 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:13.704 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:13.704 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 78997' 00:15:13.704 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@969 -- # kill 78997 00:15:13.704 [2024-10-09 03:17:56.778829] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:13.704 03:17:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@974 -- # wait 
78997 00:15:13.963 [2024-10-09 03:17:57.230772] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:15.872 03:17:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:15.872 00:15:15.872 real 0m13.960s 00:15:15.872 user 0m17.255s 00:15:15.872 sys 0m1.979s 00:15:15.872 03:17:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:15.872 03:17:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.872 ************************************ 00:15:15.872 END TEST raid_rebuild_test_io 00:15:15.872 ************************************ 00:15:15.872 03:17:58 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:15:15.872 03:17:58 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:15:15.872 03:17:58 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:15.872 03:17:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:15.872 ************************************ 00:15:15.872 START TEST raid_rebuild_test_sb_io 00:15:15.872 ************************************ 00:15:15.872 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 4 true true true 00:15:15.872 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:15.872 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:15.872 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:15.872 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:15.872 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:15.872 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:15.872 03:17:58 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local 
raid_bdev_size 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79406 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79406 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@831 -- # '[' -z 79406 ']' 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:15.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:15.873 03:17:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.873 [2024-10-09 03:17:58.937546] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:15:15.873 [2024-10-09 03:17:58.937747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:15:15.873 Zero copy mechanism will not be used. 00:15:15.873 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79406 ] 00:15:15.873 [2024-10-09 03:17:59.101586] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:16.133 [2024-10-09 03:17:59.375563] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:16.392 [2024-10-09 03:17:59.619281] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.392 [2024-10-09 03:17:59.619334] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # return 0 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.652 BaseBdev1_malloc 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.652 03:17:59 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.652 [2024-10-09 03:17:59.837887] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:16.652 [2024-10-09 03:17:59.838063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.652 [2024-10-09 03:17:59.838114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:16.652 [2024-10-09 03:17:59.838155] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.652 [2024-10-09 03:17:59.840798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.652 [2024-10-09 03:17:59.840896] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:16.652 BaseBdev1 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.652 BaseBdev2_malloc 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.652 [2024-10-09 03:17:59.916579] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on BaseBdev2_malloc 00:15:16.652 [2024-10-09 03:17:59.916780] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.652 [2024-10-09 03:17:59.916815] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:16.652 [2024-10-09 03:17:59.916830] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.652 [2024-10-09 03:17:59.919700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.652 BaseBdev2 00:15:16.652 [2024-10-09 03:17:59.919810] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:16.652 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.653 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:16.653 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:16.653 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.653 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.916 BaseBdev3_malloc 00:15:16.916 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.916 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:16.916 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.916 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.916 [2024-10-09 03:17:59.985947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:16.916 [2024-10-09 03:17:59.986029] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.916 
[2024-10-09 03:17:59.986058] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:16.916 [2024-10-09 03:17:59.986072] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.916 [2024-10-09 03:17:59.988736] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.916 [2024-10-09 03:17:59.988794] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:16.916 BaseBdev3 00:15:16.916 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.916 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:16.916 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:16.916 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.916 03:17:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.916 BaseBdev4_malloc 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.916 [2024-10-09 03:18:00.051885] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:16.916 [2024-10-09 03:18:00.052050] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:16.916 [2024-10-09 03:18:00.052097] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:16.916 [2024-10-09 03:18:00.052136] 
vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.916 [2024-10-09 03:18:00.054727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.916 [2024-10-09 03:18:00.054820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:16.916 BaseBdev4 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.916 spare_malloc 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.916 spare_delay 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.916 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.916 [2024-10-09 03:18:00.129518] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:16.916 [2024-10-09 03:18:00.129695] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:16.916 [2024-10-09 03:18:00.129742] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:16.916 [2024-10-09 03:18:00.129785] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:16.916 [2024-10-09 03:18:00.132340] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:16.916 [2024-10-09 03:18:00.132424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:16.916 spare 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.917 [2024-10-09 03:18:00.141553] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:16.917 [2024-10-09 03:18:00.143717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:16.917 [2024-10-09 03:18:00.143861] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:16.917 [2024-10-09 03:18:00.143946] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:16.917 [2024-10-09 03:18:00.144179] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:16.917 [2024-10-09 03:18:00.144229] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:16.917 [2024-10-09 03:18:00.144528] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:16.917 [2024-10-09 03:18:00.144773] 
bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:16.917 [2024-10-09 03:18:00.144819] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:16.917 [2024-10-09 03:18:00.145040] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:16.917 03:18:00 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.917 "name": "raid_bdev1", 00:15:16.917 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:16.917 "strip_size_kb": 0, 00:15:16.917 "state": "online", 00:15:16.917 "raid_level": "raid1", 00:15:16.917 "superblock": true, 00:15:16.917 "num_base_bdevs": 4, 00:15:16.917 "num_base_bdevs_discovered": 4, 00:15:16.917 "num_base_bdevs_operational": 4, 00:15:16.917 "base_bdevs_list": [ 00:15:16.917 { 00:15:16.917 "name": "BaseBdev1", 00:15:16.917 "uuid": "e1411ee9-29c4-5d81-a534-fed441cb23d7", 00:15:16.917 "is_configured": true, 00:15:16.917 "data_offset": 2048, 00:15:16.917 "data_size": 63488 00:15:16.917 }, 00:15:16.917 { 00:15:16.917 "name": "BaseBdev2", 00:15:16.917 "uuid": "3fcb10cb-6752-556d-bb57-9132aea0c9d1", 00:15:16.917 "is_configured": true, 00:15:16.917 "data_offset": 2048, 00:15:16.917 "data_size": 63488 00:15:16.917 }, 00:15:16.917 { 00:15:16.917 "name": "BaseBdev3", 00:15:16.917 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:16.917 "is_configured": true, 00:15:16.917 "data_offset": 2048, 00:15:16.917 "data_size": 63488 00:15:16.917 }, 00:15:16.917 { 00:15:16.917 "name": "BaseBdev4", 00:15:16.917 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:16.917 "is_configured": true, 00:15:16.917 "data_offset": 2048, 00:15:16.917 "data_size": 63488 00:15:16.917 } 00:15:16.917 ] 00:15:16.917 }' 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.917 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.498 [2024-10-09 03:18:00.601326] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.498 [2024-10-09 03:18:00.664758] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 
00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.498 "name": "raid_bdev1", 00:15:17.498 "uuid": 
"579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:17.498 "strip_size_kb": 0, 00:15:17.498 "state": "online", 00:15:17.498 "raid_level": "raid1", 00:15:17.498 "superblock": true, 00:15:17.498 "num_base_bdevs": 4, 00:15:17.498 "num_base_bdevs_discovered": 3, 00:15:17.498 "num_base_bdevs_operational": 3, 00:15:17.498 "base_bdevs_list": [ 00:15:17.498 { 00:15:17.498 "name": null, 00:15:17.498 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.498 "is_configured": false, 00:15:17.498 "data_offset": 0, 00:15:17.498 "data_size": 63488 00:15:17.498 }, 00:15:17.498 { 00:15:17.498 "name": "BaseBdev2", 00:15:17.498 "uuid": "3fcb10cb-6752-556d-bb57-9132aea0c9d1", 00:15:17.498 "is_configured": true, 00:15:17.498 "data_offset": 2048, 00:15:17.498 "data_size": 63488 00:15:17.498 }, 00:15:17.498 { 00:15:17.498 "name": "BaseBdev3", 00:15:17.498 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:17.498 "is_configured": true, 00:15:17.498 "data_offset": 2048, 00:15:17.498 "data_size": 63488 00:15:17.498 }, 00:15:17.498 { 00:15:17.498 "name": "BaseBdev4", 00:15:17.498 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:17.498 "is_configured": true, 00:15:17.498 "data_offset": 2048, 00:15:17.498 "data_size": 63488 00:15:17.498 } 00:15:17.498 ] 00:15:17.498 }' 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.498 03:18:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.498 [2024-10-09 03:18:00.762331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:17.498 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:17.498 Zero copy mechanism will not be used. 00:15:17.498 Running I/O for 60 seconds... 
00:15:18.067 03:18:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:18.067 03:18:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:18.067 03:18:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.067 [2024-10-09 03:18:01.125169] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:18.067 03:18:01 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:18.067 03:18:01 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:18.067 [2024-10-09 03:18:01.180978] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:15:18.067 [2024-10-09 03:18:01.183566] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:18.068 [2024-10-09 03:18:01.304196] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:18.068 [2024-10-09 03:18:01.306698] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:18.327 [2024-10-09 03:18:01.518607] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:18.327 [2024-10-09 03:18:01.519303] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:18.586 125.00 IOPS, 375.00 MiB/s [2024-10-09T03:18:01.889Z] [2024-10-09 03:18:01.874861] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:18.846 [2024-10-09 03:18:02.107934] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:18.846 [2024-10-09 03:18:02.109464] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:19.105 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.105 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.105 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.105 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.105 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.105 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.105 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.105 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.105 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.105 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.105 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.105 "name": "raid_bdev1", 00:15:19.105 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:19.105 "strip_size_kb": 0, 00:15:19.105 "state": "online", 00:15:19.105 "raid_level": "raid1", 00:15:19.105 "superblock": true, 00:15:19.105 "num_base_bdevs": 4, 00:15:19.105 "num_base_bdevs_discovered": 4, 00:15:19.105 "num_base_bdevs_operational": 4, 00:15:19.105 "process": { 00:15:19.105 "type": "rebuild", 00:15:19.105 "target": "spare", 00:15:19.105 "progress": { 00:15:19.105 "blocks": 10240, 00:15:19.105 "percent": 16 00:15:19.105 } 00:15:19.105 }, 00:15:19.106 "base_bdevs_list": [ 00:15:19.106 { 00:15:19.106 "name": "spare", 
00:15:19.106 "uuid": "9b6eb536-f567-5311-a991-ea47128d2980", 00:15:19.106 "is_configured": true, 00:15:19.106 "data_offset": 2048, 00:15:19.106 "data_size": 63488 00:15:19.106 }, 00:15:19.106 { 00:15:19.106 "name": "BaseBdev2", 00:15:19.106 "uuid": "3fcb10cb-6752-556d-bb57-9132aea0c9d1", 00:15:19.106 "is_configured": true, 00:15:19.106 "data_offset": 2048, 00:15:19.106 "data_size": 63488 00:15:19.106 }, 00:15:19.106 { 00:15:19.106 "name": "BaseBdev3", 00:15:19.106 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:19.106 "is_configured": true, 00:15:19.106 "data_offset": 2048, 00:15:19.106 "data_size": 63488 00:15:19.106 }, 00:15:19.106 { 00:15:19.106 "name": "BaseBdev4", 00:15:19.106 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:19.106 "is_configured": true, 00:15:19.106 "data_offset": 2048, 00:15:19.106 "data_size": 63488 00:15:19.106 } 00:15:19.106 ] 00:15:19.106 }' 00:15:19.106 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.106 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:19.106 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.106 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.106 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:19.106 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.106 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.106 [2024-10-09 03:18:02.300627] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.366 [2024-10-09 03:18:02.452460] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:19.366 [2024-10-09 03:18:02.460961] 
bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.366 [2024-10-09 03:18:02.461145] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.366 [2024-10-09 03:18:02.461181] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:19.366 [2024-10-09 03:18:02.482371] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.366 03:18:02 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.366 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.366 "name": "raid_bdev1", 00:15:19.366 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:19.366 "strip_size_kb": 0, 00:15:19.366 "state": "online", 00:15:19.366 "raid_level": "raid1", 00:15:19.366 "superblock": true, 00:15:19.366 "num_base_bdevs": 4, 00:15:19.366 "num_base_bdevs_discovered": 3, 00:15:19.366 "num_base_bdevs_operational": 3, 00:15:19.366 "base_bdevs_list": [ 00:15:19.366 { 00:15:19.366 "name": null, 00:15:19.366 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.366 "is_configured": false, 00:15:19.366 "data_offset": 0, 00:15:19.366 "data_size": 63488 00:15:19.366 }, 00:15:19.366 { 00:15:19.366 "name": "BaseBdev2", 00:15:19.366 "uuid": "3fcb10cb-6752-556d-bb57-9132aea0c9d1", 00:15:19.366 "is_configured": true, 00:15:19.366 "data_offset": 2048, 00:15:19.366 "data_size": 63488 00:15:19.366 }, 00:15:19.366 { 00:15:19.366 "name": "BaseBdev3", 00:15:19.366 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:19.366 "is_configured": true, 00:15:19.366 "data_offset": 2048, 00:15:19.366 "data_size": 63488 00:15:19.366 }, 00:15:19.367 { 00:15:19.367 "name": "BaseBdev4", 00:15:19.367 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:19.367 "is_configured": true, 00:15:19.367 "data_offset": 2048, 00:15:19.367 "data_size": 63488 00:15:19.367 } 00:15:19.367 ] 00:15:19.367 }' 00:15:19.367 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.367 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.887 115.00 IOPS, 345.00 MiB/s [2024-10-09T03:18:03.190Z] 03:18:02 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:19.887 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.887 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:19.887 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:19.887 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.887 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.887 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.887 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.887 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.887 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.887 03:18:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.887 "name": "raid_bdev1", 00:15:19.887 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:19.887 "strip_size_kb": 0, 00:15:19.887 "state": "online", 00:15:19.887 "raid_level": "raid1", 00:15:19.887 "superblock": true, 00:15:19.887 "num_base_bdevs": 4, 00:15:19.887 "num_base_bdevs_discovered": 3, 00:15:19.887 "num_base_bdevs_operational": 3, 00:15:19.887 "base_bdevs_list": [ 00:15:19.887 { 00:15:19.887 "name": null, 00:15:19.887 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.887 "is_configured": false, 00:15:19.887 "data_offset": 0, 00:15:19.887 "data_size": 63488 00:15:19.887 }, 00:15:19.887 { 00:15:19.887 "name": "BaseBdev2", 00:15:19.887 "uuid": "3fcb10cb-6752-556d-bb57-9132aea0c9d1", 00:15:19.887 "is_configured": true, 00:15:19.887 "data_offset": 
2048, 00:15:19.887 "data_size": 63488 00:15:19.887 }, 00:15:19.887 { 00:15:19.887 "name": "BaseBdev3", 00:15:19.887 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:19.887 "is_configured": true, 00:15:19.887 "data_offset": 2048, 00:15:19.887 "data_size": 63488 00:15:19.887 }, 00:15:19.887 { 00:15:19.887 "name": "BaseBdev4", 00:15:19.887 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:19.887 "is_configured": true, 00:15:19.887 "data_offset": 2048, 00:15:19.887 "data_size": 63488 00:15:19.887 } 00:15:19.887 ] 00:15:19.887 }' 00:15:19.887 03:18:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.887 03:18:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:19.887 03:18:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.887 03:18:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:19.887 03:18:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:19.887 03:18:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.887 03:18:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.887 [2024-10-09 03:18:03.099190] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:19.887 03:18:03 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.887 03:18:03 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:19.887 [2024-10-09 03:18:03.152044] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:15:19.887 [2024-10-09 03:18:03.154488] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:20.147 [2024-10-09 03:18:03.286742] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:20.147 [2024-10-09 03:18:03.287449] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:20.406 [2024-10-09 03:18:03.502897] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:20.406 [2024-10-09 03:18:03.503431] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:20.926 126.00 IOPS, 378.00 MiB/s [2024-10-09T03:18:04.229Z] [2024-10-09 03:18:03.986473] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:20.926 [2024-10-09 03:18:03.987029] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:20.926 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:20.926 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.926 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:20.926 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:20.926 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.926 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.926 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.926 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.926 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:20.926 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.926 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.926 "name": "raid_bdev1", 00:15:20.926 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:20.926 "strip_size_kb": 0, 00:15:20.926 "state": "online", 00:15:20.926 "raid_level": "raid1", 00:15:20.926 "superblock": true, 00:15:20.926 "num_base_bdevs": 4, 00:15:20.926 "num_base_bdevs_discovered": 4, 00:15:20.926 "num_base_bdevs_operational": 4, 00:15:20.926 "process": { 00:15:20.926 "type": "rebuild", 00:15:20.926 "target": "spare", 00:15:20.926 "progress": { 00:15:20.926 "blocks": 10240, 00:15:20.926 "percent": 16 00:15:20.926 } 00:15:20.926 }, 00:15:20.926 "base_bdevs_list": [ 00:15:20.926 { 00:15:20.926 "name": "spare", 00:15:20.926 "uuid": "9b6eb536-f567-5311-a991-ea47128d2980", 00:15:20.926 "is_configured": true, 00:15:20.926 "data_offset": 2048, 00:15:20.926 "data_size": 63488 00:15:20.926 }, 00:15:20.926 { 00:15:20.926 "name": "BaseBdev2", 00:15:20.926 "uuid": "3fcb10cb-6752-556d-bb57-9132aea0c9d1", 00:15:20.926 "is_configured": true, 00:15:20.926 "data_offset": 2048, 00:15:20.926 "data_size": 63488 00:15:20.926 }, 00:15:20.926 { 00:15:20.926 "name": "BaseBdev3", 00:15:20.926 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:20.926 "is_configured": true, 00:15:20.926 "data_offset": 2048, 00:15:20.926 "data_size": 63488 00:15:20.926 }, 00:15:20.926 { 00:15:20.926 "name": "BaseBdev4", 00:15:20.926 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:20.926 "is_configured": true, 00:15:20.926 "data_offset": 2048, 00:15:20.926 "data_size": 63488 00:15:20.926 } 00:15:20.926 ] 00:15:20.926 }' 00:15:20.926 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.186 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.186 
03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.186 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.186 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:21.186 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:21.186 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:21.186 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:21.186 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:21.186 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:21.186 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:21.186 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.186 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.186 [2024-10-09 03:18:04.308376] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:21.186 [2024-10-09 03:18:04.317025] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:21.186 [2024-10-09 03:18:04.319554] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:21.446 [2024-10-09 03:18:04.522168] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:15:21.446 [2024-10-09 03:18:04.522334] bdev_raid.c:1970:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:15:21.446 [2024-10-09 03:18:04.522416] bdev_raid.c: 859:raid_bdev_submit_rw_request: 
*DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:21.446 [2024-10-09 03:18:04.526437] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:21.446 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.446 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:21.446 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:21.446 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.447 "name": "raid_bdev1", 00:15:21.447 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:21.447 "strip_size_kb": 0, 00:15:21.447 "state": "online", 00:15:21.447 
"raid_level": "raid1", 00:15:21.447 "superblock": true, 00:15:21.447 "num_base_bdevs": 4, 00:15:21.447 "num_base_bdevs_discovered": 3, 00:15:21.447 "num_base_bdevs_operational": 3, 00:15:21.447 "process": { 00:15:21.447 "type": "rebuild", 00:15:21.447 "target": "spare", 00:15:21.447 "progress": { 00:15:21.447 "blocks": 14336, 00:15:21.447 "percent": 22 00:15:21.447 } 00:15:21.447 }, 00:15:21.447 "base_bdevs_list": [ 00:15:21.447 { 00:15:21.447 "name": "spare", 00:15:21.447 "uuid": "9b6eb536-f567-5311-a991-ea47128d2980", 00:15:21.447 "is_configured": true, 00:15:21.447 "data_offset": 2048, 00:15:21.447 "data_size": 63488 00:15:21.447 }, 00:15:21.447 { 00:15:21.447 "name": null, 00:15:21.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.447 "is_configured": false, 00:15:21.447 "data_offset": 0, 00:15:21.447 "data_size": 63488 00:15:21.447 }, 00:15:21.447 { 00:15:21.447 "name": "BaseBdev3", 00:15:21.447 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:21.447 "is_configured": true, 00:15:21.447 "data_offset": 2048, 00:15:21.447 "data_size": 63488 00:15:21.447 }, 00:15:21.447 { 00:15:21.447 "name": "BaseBdev4", 00:15:21.447 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:21.447 "is_configured": true, 00:15:21.447 "data_offset": 2048, 00:15:21.447 "data_size": 63488 00:15:21.447 } 00:15:21.447 ] 00:15:21.447 }' 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=511 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.447 "name": "raid_bdev1", 00:15:21.447 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:21.447 "strip_size_kb": 0, 00:15:21.447 "state": "online", 00:15:21.447 "raid_level": "raid1", 00:15:21.447 "superblock": true, 00:15:21.447 "num_base_bdevs": 4, 00:15:21.447 "num_base_bdevs_discovered": 3, 00:15:21.447 "num_base_bdevs_operational": 3, 00:15:21.447 "process": { 00:15:21.447 "type": "rebuild", 00:15:21.447 "target": "spare", 00:15:21.447 "progress": { 00:15:21.447 "blocks": 14336, 00:15:21.447 "percent": 22 00:15:21.447 } 00:15:21.447 }, 00:15:21.447 "base_bdevs_list": [ 00:15:21.447 { 00:15:21.447 "name": "spare", 00:15:21.447 "uuid": "9b6eb536-f567-5311-a991-ea47128d2980", 
00:15:21.447 "is_configured": true, 00:15:21.447 "data_offset": 2048, 00:15:21.447 "data_size": 63488 00:15:21.447 }, 00:15:21.447 { 00:15:21.447 "name": null, 00:15:21.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.447 "is_configured": false, 00:15:21.447 "data_offset": 0, 00:15:21.447 "data_size": 63488 00:15:21.447 }, 00:15:21.447 { 00:15:21.447 "name": "BaseBdev3", 00:15:21.447 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:21.447 "is_configured": true, 00:15:21.447 "data_offset": 2048, 00:15:21.447 "data_size": 63488 00:15:21.447 }, 00:15:21.447 { 00:15:21.447 "name": "BaseBdev4", 00:15:21.447 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:21.447 "is_configured": true, 00:15:21.447 "data_offset": 2048, 00:15:21.447 "data_size": 63488 00:15:21.447 } 00:15:21.447 ] 00:15:21.447 }' 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:21.447 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:21.707 111.75 IOPS, 335.25 MiB/s [2024-10-09T03:18:05.010Z] [2024-10-09 03:18:04.762007] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:21.707 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:21.707 03:18:04 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:21.707 [2024-10-09 03:18:04.983803] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:21.707 [2024-10-09 03:18:04.984370] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:21.967 [2024-10-09 03:18:05.203186] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:22.537 [2024-10-09 03:18:05.679740] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:15:22.537 104.40 IOPS, 313.20 MiB/s [2024-10-09T03:18:05.840Z] 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:22.537 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:22.537 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:22.537 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:22.537 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:22.537 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:22.537 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:22.537 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.537 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.537 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.537 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.798 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:22.798 "name": "raid_bdev1", 00:15:22.798 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:22.798 "strip_size_kb": 0, 00:15:22.798 "state": "online", 00:15:22.798 "raid_level": "raid1", 00:15:22.798 "superblock": true, 00:15:22.798 "num_base_bdevs": 4, 00:15:22.798 "num_base_bdevs_discovered": 3, 
00:15:22.798 "num_base_bdevs_operational": 3, 00:15:22.798 "process": { 00:15:22.798 "type": "rebuild", 00:15:22.798 "target": "spare", 00:15:22.798 "progress": { 00:15:22.798 "blocks": 28672, 00:15:22.798 "percent": 45 00:15:22.798 } 00:15:22.798 }, 00:15:22.798 "base_bdevs_list": [ 00:15:22.798 { 00:15:22.798 "name": "spare", 00:15:22.798 "uuid": "9b6eb536-f567-5311-a991-ea47128d2980", 00:15:22.798 "is_configured": true, 00:15:22.798 "data_offset": 2048, 00:15:22.798 "data_size": 63488 00:15:22.798 }, 00:15:22.798 { 00:15:22.798 "name": null, 00:15:22.798 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:22.798 "is_configured": false, 00:15:22.798 "data_offset": 0, 00:15:22.798 "data_size": 63488 00:15:22.798 }, 00:15:22.798 { 00:15:22.798 "name": "BaseBdev3", 00:15:22.798 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:22.798 "is_configured": true, 00:15:22.798 "data_offset": 2048, 00:15:22.798 "data_size": 63488 00:15:22.798 }, 00:15:22.798 { 00:15:22.798 "name": "BaseBdev4", 00:15:22.798 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:22.798 "is_configured": true, 00:15:22.798 "data_offset": 2048, 00:15:22.798 "data_size": 63488 00:15:22.798 } 00:15:22.798 ] 00:15:22.798 }' 00:15:22.798 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:22.798 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:22.798 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.798 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:22.798 03:18:05 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:23.058 [2024-10-09 03:18:06.296538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:23.319 [2024-10-09 03:18:06.517070] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:23.579 94.00 IOPS, 282.00 MiB/s [2024-10-09T03:18:06.882Z] [2024-10-09 03:18:06.841577] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:23.840 03:18:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:23.840 03:18:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:23.840 03:18:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.840 03:18:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:23.840 03:18:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:23.840 03:18:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.840 03:18:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.840 03:18:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.840 03:18:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.840 03:18:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.840 03:18:06 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.840 [2024-10-09 03:18:06.966789] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:23.840 03:18:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.840 "name": "raid_bdev1", 00:15:23.840 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:23.840 "strip_size_kb": 0, 00:15:23.840 "state": 
"online", 00:15:23.840 "raid_level": "raid1", 00:15:23.840 "superblock": true, 00:15:23.840 "num_base_bdevs": 4, 00:15:23.840 "num_base_bdevs_discovered": 3, 00:15:23.840 "num_base_bdevs_operational": 3, 00:15:23.840 "process": { 00:15:23.840 "type": "rebuild", 00:15:23.840 "target": "spare", 00:15:23.840 "progress": { 00:15:23.840 "blocks": 45056, 00:15:23.840 "percent": 70 00:15:23.840 } 00:15:23.840 }, 00:15:23.840 "base_bdevs_list": [ 00:15:23.840 { 00:15:23.840 "name": "spare", 00:15:23.840 "uuid": "9b6eb536-f567-5311-a991-ea47128d2980", 00:15:23.840 "is_configured": true, 00:15:23.840 "data_offset": 2048, 00:15:23.840 "data_size": 63488 00:15:23.840 }, 00:15:23.840 { 00:15:23.840 "name": null, 00:15:23.840 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.840 "is_configured": false, 00:15:23.840 "data_offset": 0, 00:15:23.840 "data_size": 63488 00:15:23.840 }, 00:15:23.840 { 00:15:23.840 "name": "BaseBdev3", 00:15:23.840 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:23.840 "is_configured": true, 00:15:23.840 "data_offset": 2048, 00:15:23.840 "data_size": 63488 00:15:23.840 }, 00:15:23.840 { 00:15:23.840 "name": "BaseBdev4", 00:15:23.840 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:23.840 "is_configured": true, 00:15:23.840 "data_offset": 2048, 00:15:23.840 "data_size": 63488 00:15:23.840 } 00:15:23.840 ] 00:15:23.840 }' 00:15:23.840 03:18:06 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.840 03:18:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:23.840 03:18:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.840 03:18:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:23.840 03:18:07 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:24.100 [2024-10-09 03:18:07.289172] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296 00:15:24.361 [2024-10-09 03:18:07.515738] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:15:24.908 85.43 IOPS, 256.29 MiB/s [2024-10-09T03:18:08.211Z] 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:24.908 "name": "raid_bdev1", 00:15:24.908 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:24.908 "strip_size_kb": 0, 00:15:24.908 "state": "online", 00:15:24.908 "raid_level": "raid1", 00:15:24.908 "superblock": true, 00:15:24.908 "num_base_bdevs": 4, 00:15:24.908 "num_base_bdevs_discovered": 3, 
00:15:24.908 "num_base_bdevs_operational": 3, 00:15:24.908 "process": { 00:15:24.908 "type": "rebuild", 00:15:24.908 "target": "spare", 00:15:24.908 "progress": { 00:15:24.908 "blocks": 59392, 00:15:24.908 "percent": 93 00:15:24.908 } 00:15:24.908 }, 00:15:24.908 "base_bdevs_list": [ 00:15:24.908 { 00:15:24.908 "name": "spare", 00:15:24.908 "uuid": "9b6eb536-f567-5311-a991-ea47128d2980", 00:15:24.908 "is_configured": true, 00:15:24.908 "data_offset": 2048, 00:15:24.908 "data_size": 63488 00:15:24.908 }, 00:15:24.908 { 00:15:24.908 "name": null, 00:15:24.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.908 "is_configured": false, 00:15:24.908 "data_offset": 0, 00:15:24.908 "data_size": 63488 00:15:24.908 }, 00:15:24.908 { 00:15:24.908 "name": "BaseBdev3", 00:15:24.908 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:24.908 "is_configured": true, 00:15:24.908 "data_offset": 2048, 00:15:24.908 "data_size": 63488 00:15:24.908 }, 00:15:24.908 { 00:15:24.908 "name": "BaseBdev4", 00:15:24.908 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:24.908 "is_configured": true, 00:15:24.908 "data_offset": 2048, 00:15:24.908 "data_size": 63488 00:15:24.908 } 00:15:24.908 ] 00:15:24.908 }' 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:24.908 [2024-10-09 03:18:08.187161] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:24.908 03:18:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:25.168 [2024-10-09 03:18:08.278601] bdev_raid.c:2558:raid_bdev_process_finish_done: 
*NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:25.168 [2024-10-09 03:18:08.281105] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:25.997 79.38 IOPS, 238.12 MiB/s [2024-10-09T03:18:09.300Z] 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:25.997 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:25.997 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:25.997 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:25.997 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:25.997 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:25.997 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:25.997 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:25.997 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.997 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.997 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.997 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:25.997 "name": "raid_bdev1", 00:15:25.997 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:25.997 "strip_size_kb": 0, 00:15:25.997 "state": "online", 00:15:25.997 "raid_level": "raid1", 00:15:25.997 "superblock": true, 00:15:25.997 "num_base_bdevs": 4, 00:15:25.997 "num_base_bdevs_discovered": 3, 00:15:25.997 "num_base_bdevs_operational": 3, 00:15:25.997 "base_bdevs_list": [ 00:15:25.997 { 00:15:25.997 "name": 
"spare", 00:15:25.997 "uuid": "9b6eb536-f567-5311-a991-ea47128d2980", 00:15:25.997 "is_configured": true, 00:15:25.997 "data_offset": 2048, 00:15:25.997 "data_size": 63488 00:15:25.997 }, 00:15:25.997 { 00:15:25.997 "name": null, 00:15:25.997 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:25.997 "is_configured": false, 00:15:25.997 "data_offset": 0, 00:15:25.997 "data_size": 63488 00:15:25.997 }, 00:15:25.997 { 00:15:25.997 "name": "BaseBdev3", 00:15:25.997 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:25.997 "is_configured": true, 00:15:25.997 "data_offset": 2048, 00:15:25.997 "data_size": 63488 00:15:25.997 }, 00:15:25.997 { 00:15:25.997 "name": "BaseBdev4", 00:15:25.997 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:25.997 "is_configured": true, 00:15:25.997 "data_offset": 2048, 00:15:25.997 "data_size": 63488 00:15:25.997 } 00:15:25.997 ] 00:15:25.997 }' 00:15:25.997 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:25.997 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:25.997 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:26.257 "name": "raid_bdev1", 00:15:26.257 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:26.257 "strip_size_kb": 0, 00:15:26.257 "state": "online", 00:15:26.257 "raid_level": "raid1", 00:15:26.257 "superblock": true, 00:15:26.257 "num_base_bdevs": 4, 00:15:26.257 "num_base_bdevs_discovered": 3, 00:15:26.257 "num_base_bdevs_operational": 3, 00:15:26.257 "base_bdevs_list": [ 00:15:26.257 { 00:15:26.257 "name": "spare", 00:15:26.257 "uuid": "9b6eb536-f567-5311-a991-ea47128d2980", 00:15:26.257 "is_configured": true, 00:15:26.257 "data_offset": 2048, 00:15:26.257 "data_size": 63488 00:15:26.257 }, 00:15:26.257 { 00:15:26.257 "name": null, 00:15:26.257 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.257 "is_configured": false, 00:15:26.257 "data_offset": 0, 00:15:26.257 "data_size": 63488 00:15:26.257 }, 00:15:26.257 { 00:15:26.257 "name": "BaseBdev3", 00:15:26.257 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:26.257 "is_configured": true, 00:15:26.257 "data_offset": 2048, 00:15:26.257 "data_size": 63488 00:15:26.257 }, 00:15:26.257 { 00:15:26.257 "name": "BaseBdev4", 00:15:26.257 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:26.257 "is_configured": true, 00:15:26.257 "data_offset": 2048, 00:15:26.257 "data_size": 63488 00:15:26.257 } 00:15:26.257 ] 
00:15:26.257 }' 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.257 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.258 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:26.258 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.258 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.258 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.258 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.258 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.258 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.258 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.258 03:18:09 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.258 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.258 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.258 "name": "raid_bdev1", 00:15:26.258 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:26.258 "strip_size_kb": 0, 00:15:26.258 "state": "online", 00:15:26.258 "raid_level": "raid1", 00:15:26.258 "superblock": true, 00:15:26.258 "num_base_bdevs": 4, 00:15:26.258 "num_base_bdevs_discovered": 3, 00:15:26.258 "num_base_bdevs_operational": 3, 00:15:26.258 "base_bdevs_list": [ 00:15:26.258 { 00:15:26.258 "name": "spare", 00:15:26.258 "uuid": "9b6eb536-f567-5311-a991-ea47128d2980", 00:15:26.258 "is_configured": true, 00:15:26.258 "data_offset": 2048, 00:15:26.258 "data_size": 63488 00:15:26.258 }, 00:15:26.258 { 00:15:26.258 "name": null, 00:15:26.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.258 "is_configured": false, 00:15:26.258 "data_offset": 0, 00:15:26.258 "data_size": 63488 00:15:26.258 }, 00:15:26.258 { 00:15:26.258 "name": "BaseBdev3", 00:15:26.258 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:26.258 "is_configured": true, 00:15:26.258 "data_offset": 2048, 00:15:26.258 "data_size": 63488 00:15:26.258 }, 00:15:26.258 { 00:15:26.258 "name": "BaseBdev4", 00:15:26.258 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:26.258 "is_configured": true, 00:15:26.258 "data_offset": 2048, 00:15:26.258 "data_size": 63488 00:15:26.258 } 00:15:26.258 ] 00:15:26.258 }' 00:15:26.258 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.258 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.777 74.44 IOPS, 223.33 MiB/s [2024-10-09T03:18:10.080Z] 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:26.777 
03:18:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.777 03:18:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.777 [2024-10-09 03:18:09.938027] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:26.777 [2024-10-09 03:18:09.938083] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:26.777 00:15:26.777 Latency(us) 00:15:26.777 [2024-10-09T03:18:10.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:26.777 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:26.777 raid_bdev1 : 9.27 73.21 219.63 0.00 0.00 18447.84 325.53 115847.04 00:15:26.777 [2024-10-09T03:18:10.080Z] =================================================================================================================== 00:15:26.777 [2024-10-09T03:18:10.080Z] Total : 73.21 219.63 0.00 0.00 18447.84 325.53 115847.04 00:15:26.777 [2024-10-09 03:18:10.042349] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.777 [2024-10-09 03:18:10.042403] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:26.777 [2024-10-09 03:18:10.042505] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:26.777 [2024-10-09 03:18:10.042519] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:26.777 { 00:15:26.777 "results": [ 00:15:26.777 { 00:15:26.777 "job": "raid_bdev1", 00:15:26.777 "core_mask": "0x1", 00:15:26.777 "workload": "randrw", 00:15:26.777 "percentage": 50, 00:15:26.777 "status": "finished", 00:15:26.777 "queue_depth": 2, 00:15:26.777 "io_size": 3145728, 00:15:26.777 "runtime": 9.274691, 00:15:26.777 "iops": 73.2099861871409, 00:15:26.777 "mibps": 219.6299585614227, 00:15:26.777 "io_failed": 
0, 00:15:26.777 "io_timeout": 0, 00:15:26.777 "avg_latency_us": 18447.835710105406, 00:15:26.777 "min_latency_us": 325.5336244541485, 00:15:26.777 "max_latency_us": 115847.04279475982 00:15:26.777 } 00:15:26.777 ], 00:15:26.777 "core_count": 1 00:15:26.777 } 00:15:26.777 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.777 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.777 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:26.777 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.777 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:26.777 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.037 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:27.037 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:27.037 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:27.038 03:18:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:27.038 /dev/nbd0 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.038 1+0 records in 00:15:27.038 1+0 records out 00:15:27.038 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000283669 s, 14.4 MB/s 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@886 -- # size=4096 00:15:27.038 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:15:27.298 /dev/nbd1 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:27.298 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.558 1+0 records in 00:15:27.558 1+0 records out 00:15:27.558 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039542 s, 10.4 MB/s 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # 
size=4096 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:27.558 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:27.818 03:18:10 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:27.818 03:18:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:15:28.078 /dev/nbd1 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd 
nbd1 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@869 -- # local i 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # break 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:28.078 1+0 records in 00:15:28.078 1+0 records out 00:15:28.078 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278899 s, 14.7 MB/s 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@886 -- # size=4096 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:28.078 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:15:28.079 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # return 0 00:15:28.079 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:28.079 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # 
(( i < 1 )) 00:15:28.079 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:28.079 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:28.079 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.079 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:28.079 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:28.079 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:28.079 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.079 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:28.339 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.599 [2024-10-09 03:18:11.758984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:28.599 [2024-10-09 03:18:11.759063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:28.599 [2024-10-09 03:18:11.759091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:15:28.599 [2024-10-09 03:18:11.759106] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:28.599 [2024-10-09 03:18:11.762129] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:28.599 [2024-10-09 03:18:11.762178] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:28.599 [2024-10-09 03:18:11.762287] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:28.599 [2024-10-09 03:18:11.762382] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:28.599 [2024-10-09 03:18:11.762581] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:28.599 [2024-10-09 03:18:11.762731] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:28.599 spare 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.599 03:18:11 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.599 [2024-10-09 03:18:11.862652] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:15:28.599 [2024-10-09 03:18:11.862682] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:28.599 [2024-10-09 03:18:11.862977] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:15:28.599 [2024-10-09 03:18:11.863136] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:28.599 [2024-10-09 03:18:11.863152] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:28.599 [2024-10-09 03:18:11.863306] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:28.599 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.859 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:28.860 "name": "raid_bdev1", 00:15:28.860 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:28.860 "strip_size_kb": 0, 00:15:28.860 "state": "online", 00:15:28.860 "raid_level": "raid1", 00:15:28.860 "superblock": true, 00:15:28.860 "num_base_bdevs": 4, 00:15:28.860 "num_base_bdevs_discovered": 3, 00:15:28.860 "num_base_bdevs_operational": 3, 00:15:28.860 "base_bdevs_list": [ 00:15:28.860 { 00:15:28.860 "name": "spare", 00:15:28.860 "uuid": "9b6eb536-f567-5311-a991-ea47128d2980", 00:15:28.860 "is_configured": true, 00:15:28.860 "data_offset": 2048, 00:15:28.860 "data_size": 63488 00:15:28.860 }, 00:15:28.860 { 00:15:28.860 "name": null, 00:15:28.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.860 "is_configured": false, 00:15:28.860 "data_offset": 2048, 00:15:28.860 "data_size": 63488 00:15:28.860 }, 00:15:28.860 { 00:15:28.860 "name": "BaseBdev3", 00:15:28.860 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:28.860 "is_configured": true, 00:15:28.860 "data_offset": 2048, 00:15:28.860 "data_size": 63488 00:15:28.860 }, 00:15:28.860 { 00:15:28.860 "name": "BaseBdev4", 00:15:28.860 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:28.860 "is_configured": true, 00:15:28.860 
"data_offset": 2048, 00:15:28.860 "data_size": 63488 00:15:28.860 } 00:15:28.860 ] 00:15:28.860 }' 00:15:28.860 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:28.860 03:18:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.120 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:29.120 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:29.120 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:29.120 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:29.120 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:29.120 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.120 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.120 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.120 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.120 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.120 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:29.120 "name": "raid_bdev1", 00:15:29.120 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:29.120 "strip_size_kb": 0, 00:15:29.120 "state": "online", 00:15:29.120 "raid_level": "raid1", 00:15:29.120 "superblock": true, 00:15:29.120 "num_base_bdevs": 4, 00:15:29.120 "num_base_bdevs_discovered": 3, 00:15:29.120 "num_base_bdevs_operational": 3, 00:15:29.120 "base_bdevs_list": [ 00:15:29.120 { 00:15:29.120 "name": "spare", 00:15:29.120 "uuid": 
"9b6eb536-f567-5311-a991-ea47128d2980", 00:15:29.120 "is_configured": true, 00:15:29.120 "data_offset": 2048, 00:15:29.120 "data_size": 63488 00:15:29.120 }, 00:15:29.120 { 00:15:29.120 "name": null, 00:15:29.120 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.120 "is_configured": false, 00:15:29.120 "data_offset": 2048, 00:15:29.120 "data_size": 63488 00:15:29.120 }, 00:15:29.120 { 00:15:29.120 "name": "BaseBdev3", 00:15:29.120 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:29.120 "is_configured": true, 00:15:29.120 "data_offset": 2048, 00:15:29.120 "data_size": 63488 00:15:29.120 }, 00:15:29.120 { 00:15:29.120 "name": "BaseBdev4", 00:15:29.120 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:29.120 "is_configured": true, 00:15:29.120 "data_offset": 2048, 00:15:29.120 "data_size": 63488 00:15:29.120 } 00:15:29.120 ] 00:15:29.120 }' 00:15:29.120 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:29.120 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:29.120 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 
00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.505 [2024-10-09 03:18:12.482302] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # 
set +x 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:29.505 "name": "raid_bdev1", 00:15:29.505 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:29.505 "strip_size_kb": 0, 00:15:29.505 "state": "online", 00:15:29.505 "raid_level": "raid1", 00:15:29.505 "superblock": true, 00:15:29.505 "num_base_bdevs": 4, 00:15:29.505 "num_base_bdevs_discovered": 2, 00:15:29.505 "num_base_bdevs_operational": 2, 00:15:29.505 "base_bdevs_list": [ 00:15:29.505 { 00:15:29.505 "name": null, 00:15:29.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.505 "is_configured": false, 00:15:29.505 "data_offset": 0, 00:15:29.505 "data_size": 63488 00:15:29.505 }, 00:15:29.505 { 00:15:29.505 "name": null, 00:15:29.505 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:29.505 "is_configured": false, 00:15:29.505 "data_offset": 2048, 00:15:29.505 "data_size": 63488 00:15:29.505 }, 00:15:29.505 { 00:15:29.505 "name": "BaseBdev3", 00:15:29.505 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:29.505 "is_configured": true, 00:15:29.505 "data_offset": 2048, 00:15:29.505 "data_size": 63488 00:15:29.505 }, 00:15:29.505 { 00:15:29.505 "name": "BaseBdev4", 00:15:29.505 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:29.505 "is_configured": true, 00:15:29.505 "data_offset": 2048, 00:15:29.505 "data_size": 63488 00:15:29.505 } 00:15:29.505 ] 00:15:29.505 }' 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:29.505 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.770 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 
00:15:29.770 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.770 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:29.770 [2024-10-09 03:18:12.885908] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.770 [2024-10-09 03:18:12.886076] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:29.770 [2024-10-09 03:18:12.886095] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:29.770 [2024-10-09 03:18:12.886161] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:29.770 [2024-10-09 03:18:12.899420] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:15:29.770 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.770 03:18:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:29.770 [2024-10-09 03:18:12.901533] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:30.709 03:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:30.709 03:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:30.709 03:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:30.709 03:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:30.709 03:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:30.709 03:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.709 03:18:13 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.709 03:18:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.709 03:18:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.709 03:18:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.709 03:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:30.709 "name": "raid_bdev1", 00:15:30.709 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:30.709 "strip_size_kb": 0, 00:15:30.709 "state": "online", 00:15:30.709 "raid_level": "raid1", 00:15:30.709 "superblock": true, 00:15:30.709 "num_base_bdevs": 4, 00:15:30.709 "num_base_bdevs_discovered": 3, 00:15:30.709 "num_base_bdevs_operational": 3, 00:15:30.709 "process": { 00:15:30.709 "type": "rebuild", 00:15:30.709 "target": "spare", 00:15:30.709 "progress": { 00:15:30.709 "blocks": 20480, 00:15:30.709 "percent": 32 00:15:30.709 } 00:15:30.709 }, 00:15:30.709 "base_bdevs_list": [ 00:15:30.709 { 00:15:30.709 "name": "spare", 00:15:30.709 "uuid": "9b6eb536-f567-5311-a991-ea47128d2980", 00:15:30.709 "is_configured": true, 00:15:30.709 "data_offset": 2048, 00:15:30.709 "data_size": 63488 00:15:30.709 }, 00:15:30.709 { 00:15:30.709 "name": null, 00:15:30.709 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.709 "is_configured": false, 00:15:30.709 "data_offset": 2048, 00:15:30.709 "data_size": 63488 00:15:30.709 }, 00:15:30.709 { 00:15:30.709 "name": "BaseBdev3", 00:15:30.709 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:30.709 "is_configured": true, 00:15:30.709 "data_offset": 2048, 00:15:30.709 "data_size": 63488 00:15:30.709 }, 00:15:30.709 { 00:15:30.709 "name": "BaseBdev4", 00:15:30.709 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:30.709 "is_configured": true, 00:15:30.709 "data_offset": 2048, 00:15:30.709 "data_size": 63488 00:15:30.709 } 00:15:30.709 
] 00:15:30.709 }' 00:15:30.709 03:18:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:30.709 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:30.709 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.968 [2024-10-09 03:18:14.049456] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.968 [2024-10-09 03:18:14.110080] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:30.968 [2024-10-09 03:18:14.110145] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.968 [2024-10-09 03:18:14.110162] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:30.968 [2024-10-09 03:18:14.110172] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.968 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:30.969 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.969 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:30.969 "name": "raid_bdev1", 00:15:30.969 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:30.969 "strip_size_kb": 0, 00:15:30.969 "state": "online", 00:15:30.969 "raid_level": "raid1", 00:15:30.969 "superblock": true, 00:15:30.969 "num_base_bdevs": 4, 00:15:30.969 "num_base_bdevs_discovered": 2, 00:15:30.969 "num_base_bdevs_operational": 2, 00:15:30.969 "base_bdevs_list": [ 00:15:30.969 { 00:15:30.969 "name": null, 00:15:30.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.969 "is_configured": false, 00:15:30.969 "data_offset": 0, 00:15:30.969 "data_size": 63488 00:15:30.969 }, 00:15:30.969 { 
00:15:30.969 "name": null, 00:15:30.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.969 "is_configured": false, 00:15:30.969 "data_offset": 2048, 00:15:30.969 "data_size": 63488 00:15:30.969 }, 00:15:30.969 { 00:15:30.969 "name": "BaseBdev3", 00:15:30.969 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:30.969 "is_configured": true, 00:15:30.969 "data_offset": 2048, 00:15:30.969 "data_size": 63488 00:15:30.969 }, 00:15:30.969 { 00:15:30.969 "name": "BaseBdev4", 00:15:30.969 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:30.969 "is_configured": true, 00:15:30.969 "data_offset": 2048, 00:15:30.969 "data_size": 63488 00:15:30.969 } 00:15:30.969 ] 00:15:30.969 }' 00:15:30.969 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:30.969 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.538 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:31.538 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.538 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:31.538 [2024-10-09 03:18:14.593717] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:31.538 [2024-10-09 03:18:14.593795] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:31.538 [2024-10-09 03:18:14.593824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:15:31.538 [2024-10-09 03:18:14.593850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:31.538 [2024-10-09 03:18:14.594388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:31.538 [2024-10-09 03:18:14.594424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:31.538 [2024-10-09 
03:18:14.594521] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:31.538 [2024-10-09 03:18:14.594541] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:15:31.538 [2024-10-09 03:18:14.594553] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:31.538 [2024-10-09 03:18:14.594587] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:31.538 [2024-10-09 03:18:14.607342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:15:31.538 spare 00:15:31.538 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.538 03:18:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:31.538 [2024-10-09 03:18:14.609446] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.477 03:18:15 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:32.477 "name": "raid_bdev1", 00:15:32.477 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:32.477 "strip_size_kb": 0, 00:15:32.477 "state": "online", 00:15:32.477 "raid_level": "raid1", 00:15:32.477 "superblock": true, 00:15:32.477 "num_base_bdevs": 4, 00:15:32.477 "num_base_bdevs_discovered": 3, 00:15:32.477 "num_base_bdevs_operational": 3, 00:15:32.477 "process": { 00:15:32.477 "type": "rebuild", 00:15:32.477 "target": "spare", 00:15:32.477 "progress": { 00:15:32.477 "blocks": 20480, 00:15:32.477 "percent": 32 00:15:32.477 } 00:15:32.477 }, 00:15:32.477 "base_bdevs_list": [ 00:15:32.477 { 00:15:32.477 "name": "spare", 00:15:32.477 "uuid": "9b6eb536-f567-5311-a991-ea47128d2980", 00:15:32.477 "is_configured": true, 00:15:32.477 "data_offset": 2048, 00:15:32.477 "data_size": 63488 00:15:32.477 }, 00:15:32.477 { 00:15:32.477 "name": null, 00:15:32.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.477 "is_configured": false, 00:15:32.477 "data_offset": 2048, 00:15:32.477 "data_size": 63488 00:15:32.477 }, 00:15:32.477 { 00:15:32.477 "name": "BaseBdev3", 00:15:32.477 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:32.477 "is_configured": true, 00:15:32.477 "data_offset": 2048, 00:15:32.477 "data_size": 63488 00:15:32.477 }, 00:15:32.477 { 00:15:32.477 "name": "BaseBdev4", 00:15:32.477 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:32.477 "is_configured": true, 00:15:32.477 "data_offset": 2048, 00:15:32.477 "data_size": 63488 00:15:32.477 } 00:15:32.477 ] 00:15:32.477 }' 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.477 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.477 [2024-10-09 03:18:15.749445] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.747 [2024-10-09 03:18:15.818090] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:32.747 [2024-10-09 03:18:15.818151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:32.747 [2024-10-09 03:18:15.818171] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:32.747 [2024-10-09 03:18:15.818189] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.747 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:32.748 "name": "raid_bdev1", 00:15:32.748 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:32.748 "strip_size_kb": 0, 00:15:32.748 "state": "online", 00:15:32.748 "raid_level": "raid1", 00:15:32.748 "superblock": true, 00:15:32.748 "num_base_bdevs": 4, 00:15:32.748 "num_base_bdevs_discovered": 2, 00:15:32.748 "num_base_bdevs_operational": 2, 00:15:32.748 "base_bdevs_list": [ 00:15:32.748 { 00:15:32.748 "name": null, 00:15:32.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.748 "is_configured": false, 00:15:32.748 "data_offset": 0, 00:15:32.748 "data_size": 63488 00:15:32.748 }, 00:15:32.748 { 00:15:32.748 "name": null, 00:15:32.748 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:32.748 "is_configured": false, 00:15:32.748 "data_offset": 2048, 00:15:32.748 "data_size": 63488 00:15:32.748 }, 
00:15:32.748 { 00:15:32.748 "name": "BaseBdev3", 00:15:32.748 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:32.748 "is_configured": true, 00:15:32.748 "data_offset": 2048, 00:15:32.748 "data_size": 63488 00:15:32.748 }, 00:15:32.748 { 00:15:32.748 "name": "BaseBdev4", 00:15:32.748 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:32.748 "is_configured": true, 00:15:32.748 "data_offset": 2048, 00:15:32.748 "data_size": 63488 00:15:32.748 } 00:15:32.748 ] 00:15:32.748 }' 00:15:32.748 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:32.748 03:18:15 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.012 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:33.012 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:33.012 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:33.012 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:33.012 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:33.012 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:33.012 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:33.012 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.012 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.012 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.271 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:33.271 "name": "raid_bdev1", 00:15:33.272 "uuid": 
"579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:33.272 "strip_size_kb": 0, 00:15:33.272 "state": "online", 00:15:33.272 "raid_level": "raid1", 00:15:33.272 "superblock": true, 00:15:33.272 "num_base_bdevs": 4, 00:15:33.272 "num_base_bdevs_discovered": 2, 00:15:33.272 "num_base_bdevs_operational": 2, 00:15:33.272 "base_bdevs_list": [ 00:15:33.272 { 00:15:33.272 "name": null, 00:15:33.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.272 "is_configured": false, 00:15:33.272 "data_offset": 0, 00:15:33.272 "data_size": 63488 00:15:33.272 }, 00:15:33.272 { 00:15:33.272 "name": null, 00:15:33.272 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.272 "is_configured": false, 00:15:33.272 "data_offset": 2048, 00:15:33.272 "data_size": 63488 00:15:33.272 }, 00:15:33.272 { 00:15:33.272 "name": "BaseBdev3", 00:15:33.272 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:33.272 "is_configured": true, 00:15:33.272 "data_offset": 2048, 00:15:33.272 "data_size": 63488 00:15:33.272 }, 00:15:33.272 { 00:15:33.272 "name": "BaseBdev4", 00:15:33.272 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:33.272 "is_configured": true, 00:15:33.272 "data_offset": 2048, 00:15:33.272 "data_size": 63488 00:15:33.272 } 00:15:33.272 ] 00:15:33.272 }' 00:15:33.272 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:33.272 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:33.272 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:33.272 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:33.272 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:33.272 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.272 03:18:16 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.272 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.272 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:33.272 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.272 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:33.272 [2024-10-09 03:18:16.441894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:33.272 [2024-10-09 03:18:16.441981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:33.272 [2024-10-09 03:18:16.442006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:15:33.272 [2024-10-09 03:18:16.442016] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:33.272 [2024-10-09 03:18:16.442561] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:33.272 [2024-10-09 03:18:16.442586] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:33.272 [2024-10-09 03:18:16.442683] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:33.272 [2024-10-09 03:18:16.442700] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:15:33.272 [2024-10-09 03:18:16.442712] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:33.272 [2024-10-09 03:18:16.442726] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:33.272 BaseBdev1 00:15:33.272 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:33.272 03:18:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.209 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:34.209 "name": "raid_bdev1", 00:15:34.209 "uuid": 
"579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:34.209 "strip_size_kb": 0, 00:15:34.209 "state": "online", 00:15:34.209 "raid_level": "raid1", 00:15:34.209 "superblock": true, 00:15:34.209 "num_base_bdevs": 4, 00:15:34.209 "num_base_bdevs_discovered": 2, 00:15:34.209 "num_base_bdevs_operational": 2, 00:15:34.209 "base_bdevs_list": [ 00:15:34.209 { 00:15:34.209 "name": null, 00:15:34.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.209 "is_configured": false, 00:15:34.209 "data_offset": 0, 00:15:34.209 "data_size": 63488 00:15:34.209 }, 00:15:34.209 { 00:15:34.209 "name": null, 00:15:34.209 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.209 "is_configured": false, 00:15:34.210 "data_offset": 2048, 00:15:34.210 "data_size": 63488 00:15:34.210 }, 00:15:34.210 { 00:15:34.210 "name": "BaseBdev3", 00:15:34.210 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:34.210 "is_configured": true, 00:15:34.210 "data_offset": 2048, 00:15:34.210 "data_size": 63488 00:15:34.210 }, 00:15:34.210 { 00:15:34.210 "name": "BaseBdev4", 00:15:34.210 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:34.210 "is_configured": true, 00:15:34.210 "data_offset": 2048, 00:15:34.210 "data_size": 63488 00:15:34.210 } 00:15:34.210 ] 00:15:34.210 }' 00:15:34.210 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:34.210 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.778 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:34.778 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:34.778 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:34.778 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:34.778 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:34.778 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:34.778 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.778 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:34.778 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.778 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.778 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:34.778 "name": "raid_bdev1", 00:15:34.778 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:34.778 "strip_size_kb": 0, 00:15:34.778 "state": "online", 00:15:34.778 "raid_level": "raid1", 00:15:34.778 "superblock": true, 00:15:34.778 "num_base_bdevs": 4, 00:15:34.778 "num_base_bdevs_discovered": 2, 00:15:34.778 "num_base_bdevs_operational": 2, 00:15:34.778 "base_bdevs_list": [ 00:15:34.778 { 00:15:34.778 "name": null, 00:15:34.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.778 "is_configured": false, 00:15:34.778 "data_offset": 0, 00:15:34.778 "data_size": 63488 00:15:34.778 }, 00:15:34.778 { 00:15:34.778 "name": null, 00:15:34.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:34.778 "is_configured": false, 00:15:34.778 "data_offset": 2048, 00:15:34.778 "data_size": 63488 00:15:34.778 }, 00:15:34.778 { 00:15:34.778 "name": "BaseBdev3", 00:15:34.778 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:34.778 "is_configured": true, 00:15:34.778 "data_offset": 2048, 00:15:34.778 "data_size": 63488 00:15:34.779 }, 00:15:34.779 { 00:15:34.779 "name": "BaseBdev4", 00:15:34.779 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:34.779 "is_configured": true, 00:15:34.779 "data_offset": 2048, 00:15:34.779 "data_size": 63488 00:15:34.779 
} 00:15:34.779 ] 00:15:34.779 }' 00:15:34.779 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:34.779 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:34.779 03:18:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@650 -- # local es=0 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:34.779 [2024-10-09 03:18:18.047596] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:34.779 [2024-10-09 03:18:18.047874] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than 
existing raid bdev raid_bdev1 (6) 00:15:34.779 [2024-10-09 03:18:18.047898] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:34.779 request: 00:15:34.779 { 00:15:34.779 "base_bdev": "BaseBdev1", 00:15:34.779 "raid_bdev": "raid_bdev1", 00:15:34.779 "method": "bdev_raid_add_base_bdev", 00:15:34.779 "req_id": 1 00:15:34.779 } 00:15:34.779 Got JSON-RPC error response 00:15:34.779 response: 00:15:34.779 { 00:15:34.779 "code": -22, 00:15:34.779 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:34.779 } 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@653 -- # es=1 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:34.779 03:18:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # 
local raid_bdev_info 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:36.158 "name": "raid_bdev1", 00:15:36.158 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:36.158 "strip_size_kb": 0, 00:15:36.158 "state": "online", 00:15:36.158 "raid_level": "raid1", 00:15:36.158 "superblock": true, 00:15:36.158 "num_base_bdevs": 4, 00:15:36.158 "num_base_bdevs_discovered": 2, 00:15:36.158 "num_base_bdevs_operational": 2, 00:15:36.158 "base_bdevs_list": [ 00:15:36.158 { 00:15:36.158 "name": null, 00:15:36.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.158 "is_configured": false, 00:15:36.158 "data_offset": 0, 00:15:36.158 "data_size": 63488 00:15:36.158 }, 00:15:36.158 { 00:15:36.158 "name": null, 00:15:36.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.158 "is_configured": false, 00:15:36.158 "data_offset": 2048, 00:15:36.158 "data_size": 63488 00:15:36.158 }, 00:15:36.158 { 00:15:36.158 "name": "BaseBdev3", 00:15:36.158 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:36.158 "is_configured": true, 00:15:36.158 
"data_offset": 2048, 00:15:36.158 "data_size": 63488 00:15:36.158 }, 00:15:36.158 { 00:15:36.158 "name": "BaseBdev4", 00:15:36.158 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:36.158 "is_configured": true, 00:15:36.158 "data_offset": 2048, 00:15:36.158 "data_size": 63488 00:15:36.158 } 00:15:36.158 ] 00:15:36.158 }' 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:36.158 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:36.418 "name": "raid_bdev1", 00:15:36.418 "uuid": "579c7d2b-6242-47a8-a86d-024c4f6b67da", 00:15:36.418 "strip_size_kb": 0, 00:15:36.418 "state": "online", 00:15:36.418 "raid_level": "raid1", 00:15:36.418 "superblock": true, 
00:15:36.418 "num_base_bdevs": 4, 00:15:36.418 "num_base_bdevs_discovered": 2, 00:15:36.418 "num_base_bdevs_operational": 2, 00:15:36.418 "base_bdevs_list": [ 00:15:36.418 { 00:15:36.418 "name": null, 00:15:36.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.418 "is_configured": false, 00:15:36.418 "data_offset": 0, 00:15:36.418 "data_size": 63488 00:15:36.418 }, 00:15:36.418 { 00:15:36.418 "name": null, 00:15:36.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:36.418 "is_configured": false, 00:15:36.418 "data_offset": 2048, 00:15:36.418 "data_size": 63488 00:15:36.418 }, 00:15:36.418 { 00:15:36.418 "name": "BaseBdev3", 00:15:36.418 "uuid": "17fcbab6-826d-5b86-888c-f670ab0b378d", 00:15:36.418 "is_configured": true, 00:15:36.418 "data_offset": 2048, 00:15:36.418 "data_size": 63488 00:15:36.418 }, 00:15:36.418 { 00:15:36.418 "name": "BaseBdev4", 00:15:36.418 "uuid": "fbc5b63b-82b9-54f6-b48d-a0c1727287f7", 00:15:36.418 "is_configured": true, 00:15:36.418 "data_offset": 2048, 00:15:36.418 "data_size": 63488 00:15:36.418 } 00:15:36.418 ] 00:15:36.418 }' 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79406 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@950 -- # '[' -z 79406 ']' 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # kill -0 79406 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # uname 00:15:36.418 03:18:19 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:36.418 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 79406 00:15:36.677 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:36.677 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:36.677 killing process with pid 79406 00:15:36.677 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@968 -- # echo 'killing process with pid 79406' 00:15:36.677 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@969 -- # kill 79406 00:15:36.677 Received shutdown signal, test time was about 18.997196 seconds 00:15:36.677 00:15:36.677 Latency(us) 00:15:36.677 [2024-10-09T03:18:19.980Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:36.677 [2024-10-09T03:18:19.980Z] =================================================================================================================== 00:15:36.677 [2024-10-09T03:18:19.980Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:36.677 [2024-10-09 03:18:19.725537] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:36.677 03:18:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@974 -- # wait 79406 00:15:36.677 [2024-10-09 03:18:19.725709] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:36.677 [2024-10-09 03:18:19.725808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:36.677 [2024-10-09 03:18:19.725825] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:36.937 [2024-10-09 03:18:20.167599] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:38.316 03:18:21 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@786 -- # return 0 00:15:38.316 00:15:38.316 real 0m22.735s 00:15:38.316 user 0m29.162s 00:15:38.316 sys 0m2.711s 00:15:38.316 03:18:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:38.316 03:18:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:38.316 ************************************ 00:15:38.316 END TEST raid_rebuild_test_sb_io 00:15:38.316 ************************************ 00:15:38.575 03:18:21 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:15:38.575 03:18:21 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:15:38.575 03:18:21 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:38.575 03:18:21 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:38.575 03:18:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:38.575 ************************************ 00:15:38.575 START TEST raid5f_state_function_test 00:15:38.575 ************************************ 00:15:38.575 03:18:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 false 00:15:38.575 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo 
BaseBdev1 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' 
false = true ']' 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80152 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80152' 00:15:38.576 Process raid pid: 80152 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80152 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 80152 ']' 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:38.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:38.576 03:18:21 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.576 [2024-10-09 03:18:21.741225] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:15:38.576 [2024-10-09 03:18:21.741345] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:38.835 [2024-10-09 03:18:21.907152] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:39.094 [2024-10-09 03:18:22.168373] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:39.354 [2024-10-09 03:18:22.431980] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.354 [2024-10-09 03:18:22.432033] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.354 [2024-10-09 03:18:22.613456] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:39.354 [2024-10-09 03:18:22.613522] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.354 [2024-10-09 03:18:22.613539] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.354 [2024-10-09 03:18:22.613550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.354 [2024-10-09 03:18:22.613557] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:15:39.354 [2024-10-09 03:18:22.613568] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.354 03:18:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:15:39.613 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:39.613 "name": "Existed_Raid", 00:15:39.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.613 "strip_size_kb": 64, 00:15:39.613 "state": "configuring", 00:15:39.613 "raid_level": "raid5f", 00:15:39.613 "superblock": false, 00:15:39.613 "num_base_bdevs": 3, 00:15:39.613 "num_base_bdevs_discovered": 0, 00:15:39.613 "num_base_bdevs_operational": 3, 00:15:39.613 "base_bdevs_list": [ 00:15:39.613 { 00:15:39.613 "name": "BaseBdev1", 00:15:39.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.613 "is_configured": false, 00:15:39.613 "data_offset": 0, 00:15:39.613 "data_size": 0 00:15:39.613 }, 00:15:39.613 { 00:15:39.613 "name": "BaseBdev2", 00:15:39.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.613 "is_configured": false, 00:15:39.613 "data_offset": 0, 00:15:39.613 "data_size": 0 00:15:39.613 }, 00:15:39.613 { 00:15:39.613 "name": "BaseBdev3", 00:15:39.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.613 "is_configured": false, 00:15:39.613 "data_offset": 0, 00:15:39.613 "data_size": 0 00:15:39.613 } 00:15:39.613 ] 00:15:39.613 }' 00:15:39.613 03:18:22 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:39.613 03:18:22 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.873 [2024-10-09 03:18:23.089007] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:39.873 [2024-10-09 03:18:23.089068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.873 [2024-10-09 03:18:23.101040] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:39.873 [2024-10-09 03:18:23.101100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:39.873 [2024-10-09 03:18:23.101111] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:39.873 [2024-10-09 03:18:23.101122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:39.873 [2024-10-09 03:18:23.101130] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:39.873 [2024-10-09 03:18:23.101140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.873 [2024-10-09 03:18:23.171744] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.873 BaseBdev1 00:15:39.873 03:18:23 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.873 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.132 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.132 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:40.132 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.132 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.132 [ 00:15:40.132 { 00:15:40.132 "name": "BaseBdev1", 00:15:40.132 "aliases": [ 00:15:40.132 "10bfea97-27fc-42e8-93e9-8586aa970830" 00:15:40.132 ], 00:15:40.132 "product_name": "Malloc disk", 00:15:40.132 "block_size": 512, 00:15:40.132 "num_blocks": 65536, 00:15:40.132 "uuid": "10bfea97-27fc-42e8-93e9-8586aa970830", 00:15:40.132 "assigned_rate_limits": { 00:15:40.132 "rw_ios_per_sec": 0, 00:15:40.132 
"rw_mbytes_per_sec": 0, 00:15:40.132 "r_mbytes_per_sec": 0, 00:15:40.132 "w_mbytes_per_sec": 0 00:15:40.132 }, 00:15:40.132 "claimed": true, 00:15:40.132 "claim_type": "exclusive_write", 00:15:40.132 "zoned": false, 00:15:40.132 "supported_io_types": { 00:15:40.132 "read": true, 00:15:40.132 "write": true, 00:15:40.132 "unmap": true, 00:15:40.132 "flush": true, 00:15:40.132 "reset": true, 00:15:40.132 "nvme_admin": false, 00:15:40.132 "nvme_io": false, 00:15:40.132 "nvme_io_md": false, 00:15:40.132 "write_zeroes": true, 00:15:40.132 "zcopy": true, 00:15:40.132 "get_zone_info": false, 00:15:40.132 "zone_management": false, 00:15:40.132 "zone_append": false, 00:15:40.132 "compare": false, 00:15:40.132 "compare_and_write": false, 00:15:40.132 "abort": true, 00:15:40.132 "seek_hole": false, 00:15:40.132 "seek_data": false, 00:15:40.132 "copy": true, 00:15:40.132 "nvme_iov_md": false 00:15:40.132 }, 00:15:40.132 "memory_domains": [ 00:15:40.132 { 00:15:40.132 "dma_device_id": "system", 00:15:40.132 "dma_device_type": 1 00:15:40.132 }, 00:15:40.132 { 00:15:40.132 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.132 "dma_device_type": 2 00:15:40.133 } 00:15:40.133 ], 00:15:40.133 "driver_specific": {} 00:15:40.133 } 00:15:40.133 ] 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.133 03:18:23 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.133 "name": "Existed_Raid", 00:15:40.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.133 "strip_size_kb": 64, 00:15:40.133 "state": "configuring", 00:15:40.133 "raid_level": "raid5f", 00:15:40.133 "superblock": false, 00:15:40.133 "num_base_bdevs": 3, 00:15:40.133 "num_base_bdevs_discovered": 1, 00:15:40.133 "num_base_bdevs_operational": 3, 00:15:40.133 "base_bdevs_list": [ 00:15:40.133 { 00:15:40.133 "name": "BaseBdev1", 00:15:40.133 "uuid": "10bfea97-27fc-42e8-93e9-8586aa970830", 00:15:40.133 "is_configured": true, 00:15:40.133 "data_offset": 0, 00:15:40.133 "data_size": 65536 00:15:40.133 }, 00:15:40.133 { 00:15:40.133 "name": 
"BaseBdev2", 00:15:40.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.133 "is_configured": false, 00:15:40.133 "data_offset": 0, 00:15:40.133 "data_size": 0 00:15:40.133 }, 00:15:40.133 { 00:15:40.133 "name": "BaseBdev3", 00:15:40.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.133 "is_configured": false, 00:15:40.133 "data_offset": 0, 00:15:40.133 "data_size": 0 00:15:40.133 } 00:15:40.133 ] 00:15:40.133 }' 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.133 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.392 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:40.392 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.392 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.392 [2024-10-09 03:18:23.682933] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:40.392 [2024-10-09 03:18:23.683014] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:40.392 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.392 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:40.392 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.392 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.392 [2024-10-09 03:18:23.694943] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:40.651 [2024-10-09 03:18:23.697104] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:15:40.651 [2024-10-09 03:18:23.697151] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:40.652 [2024-10-09 03:18:23.697161] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:40.652 [2024-10-09 03:18:23.697170] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:40.652 "name": "Existed_Raid", 00:15:40.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.652 "strip_size_kb": 64, 00:15:40.652 "state": "configuring", 00:15:40.652 "raid_level": "raid5f", 00:15:40.652 "superblock": false, 00:15:40.652 "num_base_bdevs": 3, 00:15:40.652 "num_base_bdevs_discovered": 1, 00:15:40.652 "num_base_bdevs_operational": 3, 00:15:40.652 "base_bdevs_list": [ 00:15:40.652 { 00:15:40.652 "name": "BaseBdev1", 00:15:40.652 "uuid": "10bfea97-27fc-42e8-93e9-8586aa970830", 00:15:40.652 "is_configured": true, 00:15:40.652 "data_offset": 0, 00:15:40.652 "data_size": 65536 00:15:40.652 }, 00:15:40.652 { 00:15:40.652 "name": "BaseBdev2", 00:15:40.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.652 "is_configured": false, 00:15:40.652 "data_offset": 0, 00:15:40.652 "data_size": 0 00:15:40.652 }, 00:15:40.652 { 00:15:40.652 "name": "BaseBdev3", 00:15:40.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.652 "is_configured": false, 00:15:40.652 "data_offset": 0, 00:15:40.652 "data_size": 0 00:15:40.652 } 00:15:40.652 ] 00:15:40.652 }' 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:40.652 03:18:23 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.911 [2024-10-09 03:18:24.196786] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:40.911 BaseBdev2 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.911 03:18:24 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:41.171 [ 00:15:41.171 { 00:15:41.171 "name": "BaseBdev2", 00:15:41.171 "aliases": [ 00:15:41.171 "422d6325-d633-4972-a0b6-908eeaefddb9" 00:15:41.171 ], 00:15:41.171 "product_name": "Malloc disk", 00:15:41.171 "block_size": 512, 00:15:41.171 "num_blocks": 65536, 00:15:41.171 "uuid": "422d6325-d633-4972-a0b6-908eeaefddb9", 00:15:41.171 "assigned_rate_limits": { 00:15:41.171 "rw_ios_per_sec": 0, 00:15:41.171 "rw_mbytes_per_sec": 0, 00:15:41.171 "r_mbytes_per_sec": 0, 00:15:41.171 "w_mbytes_per_sec": 0 00:15:41.171 }, 00:15:41.171 "claimed": true, 00:15:41.171 "claim_type": "exclusive_write", 00:15:41.171 "zoned": false, 00:15:41.171 "supported_io_types": { 00:15:41.171 "read": true, 00:15:41.171 "write": true, 00:15:41.171 "unmap": true, 00:15:41.171 "flush": true, 00:15:41.171 "reset": true, 00:15:41.171 "nvme_admin": false, 00:15:41.171 "nvme_io": false, 00:15:41.171 "nvme_io_md": false, 00:15:41.171 "write_zeroes": true, 00:15:41.171 "zcopy": true, 00:15:41.171 "get_zone_info": false, 00:15:41.171 "zone_management": false, 00:15:41.171 "zone_append": false, 00:15:41.171 "compare": false, 00:15:41.171 "compare_and_write": false, 00:15:41.171 "abort": true, 00:15:41.171 "seek_hole": false, 00:15:41.171 "seek_data": false, 00:15:41.171 "copy": true, 00:15:41.171 "nvme_iov_md": false 00:15:41.171 }, 00:15:41.171 "memory_domains": [ 00:15:41.171 { 00:15:41.171 "dma_device_id": "system", 00:15:41.171 "dma_device_type": 1 00:15:41.171 }, 00:15:41.171 { 00:15:41.171 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.171 "dma_device_type": 2 00:15:41.171 } 00:15:41.171 ], 00:15:41.171 "driver_specific": {} 00:15:41.171 } 00:15:41.171 ] 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:15:41.171 "name": "Existed_Raid", 00:15:41.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.171 "strip_size_kb": 64, 00:15:41.171 "state": "configuring", 00:15:41.171 "raid_level": "raid5f", 00:15:41.171 "superblock": false, 00:15:41.171 "num_base_bdevs": 3, 00:15:41.171 "num_base_bdevs_discovered": 2, 00:15:41.171 "num_base_bdevs_operational": 3, 00:15:41.171 "base_bdevs_list": [ 00:15:41.171 { 00:15:41.171 "name": "BaseBdev1", 00:15:41.171 "uuid": "10bfea97-27fc-42e8-93e9-8586aa970830", 00:15:41.171 "is_configured": true, 00:15:41.171 "data_offset": 0, 00:15:41.171 "data_size": 65536 00:15:41.171 }, 00:15:41.171 { 00:15:41.171 "name": "BaseBdev2", 00:15:41.171 "uuid": "422d6325-d633-4972-a0b6-908eeaefddb9", 00:15:41.171 "is_configured": true, 00:15:41.171 "data_offset": 0, 00:15:41.171 "data_size": 65536 00:15:41.171 }, 00:15:41.171 { 00:15:41.171 "name": "BaseBdev3", 00:15:41.171 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.171 "is_configured": false, 00:15:41.171 "data_offset": 0, 00:15:41.171 "data_size": 0 00:15:41.171 } 00:15:41.171 ] 00:15:41.171 }' 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.171 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.739 [2024-10-09 03:18:24.793979] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:41.739 [2024-10-09 03:18:24.794073] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:41.739 [2024-10-09 03:18:24.794090] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:41.739 [2024-10-09 03:18:24.794433] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:41.739 [2024-10-09 03:18:24.800527] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:41.739 [2024-10-09 03:18:24.800556] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:41.739 [2024-10-09 03:18:24.800893] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:41.739 BaseBdev3 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.739 [ 00:15:41.739 { 00:15:41.739 "name": "BaseBdev3", 00:15:41.739 "aliases": [ 00:15:41.739 "ba2e425a-097a-48a2-8c59-714b7e575aeb" 00:15:41.739 ], 00:15:41.739 "product_name": "Malloc disk", 00:15:41.739 "block_size": 512, 00:15:41.739 "num_blocks": 65536, 00:15:41.739 "uuid": "ba2e425a-097a-48a2-8c59-714b7e575aeb", 00:15:41.739 "assigned_rate_limits": { 00:15:41.739 "rw_ios_per_sec": 0, 00:15:41.739 "rw_mbytes_per_sec": 0, 00:15:41.739 "r_mbytes_per_sec": 0, 00:15:41.739 "w_mbytes_per_sec": 0 00:15:41.739 }, 00:15:41.739 "claimed": true, 00:15:41.739 "claim_type": "exclusive_write", 00:15:41.739 "zoned": false, 00:15:41.739 "supported_io_types": { 00:15:41.739 "read": true, 00:15:41.739 "write": true, 00:15:41.739 "unmap": true, 00:15:41.739 "flush": true, 00:15:41.739 "reset": true, 00:15:41.739 "nvme_admin": false, 00:15:41.739 "nvme_io": false, 00:15:41.739 "nvme_io_md": false, 00:15:41.739 "write_zeroes": true, 00:15:41.739 "zcopy": true, 00:15:41.739 "get_zone_info": false, 00:15:41.739 "zone_management": false, 00:15:41.739 "zone_append": false, 00:15:41.739 "compare": false, 00:15:41.739 "compare_and_write": false, 00:15:41.739 "abort": true, 00:15:41.739 "seek_hole": false, 00:15:41.739 "seek_data": false, 00:15:41.739 "copy": true, 00:15:41.739 "nvme_iov_md": false 00:15:41.739 }, 00:15:41.739 "memory_domains": [ 00:15:41.739 { 00:15:41.739 "dma_device_id": "system", 00:15:41.739 "dma_device_type": 1 00:15:41.739 }, 00:15:41.739 { 00:15:41.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:41.739 "dma_device_type": 2 00:15:41.739 } 00:15:41.739 ], 00:15:41.739 "driver_specific": {} 00:15:41.739 } 00:15:41.739 ] 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.739 03:18:24 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:41.739 "name": "Existed_Raid", 00:15:41.739 "uuid": "ad746692-02ae-4b88-ba57-5ba319bb1238", 00:15:41.739 "strip_size_kb": 64, 00:15:41.739 "state": "online", 00:15:41.739 "raid_level": "raid5f", 00:15:41.739 "superblock": false, 00:15:41.739 "num_base_bdevs": 3, 00:15:41.739 "num_base_bdevs_discovered": 3, 00:15:41.739 "num_base_bdevs_operational": 3, 00:15:41.739 "base_bdevs_list": [ 00:15:41.739 { 00:15:41.739 "name": "BaseBdev1", 00:15:41.739 "uuid": "10bfea97-27fc-42e8-93e9-8586aa970830", 00:15:41.739 "is_configured": true, 00:15:41.739 "data_offset": 0, 00:15:41.739 "data_size": 65536 00:15:41.739 }, 00:15:41.739 { 00:15:41.739 "name": "BaseBdev2", 00:15:41.739 "uuid": "422d6325-d633-4972-a0b6-908eeaefddb9", 00:15:41.739 "is_configured": true, 00:15:41.739 "data_offset": 0, 00:15:41.739 "data_size": 65536 00:15:41.739 }, 00:15:41.739 { 00:15:41.739 "name": "BaseBdev3", 00:15:41.739 "uuid": "ba2e425a-097a-48a2-8c59-714b7e575aeb", 00:15:41.739 "is_configured": true, 00:15:41.739 "data_offset": 0, 00:15:41.739 "data_size": 65536 00:15:41.739 } 00:15:41.739 ] 00:15:41.739 }' 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:41.739 03:18:24 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:42.308 03:18:25 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.308 [2024-10-09 03:18:25.332176] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:42.308 "name": "Existed_Raid", 00:15:42.308 "aliases": [ 00:15:42.308 "ad746692-02ae-4b88-ba57-5ba319bb1238" 00:15:42.308 ], 00:15:42.308 "product_name": "Raid Volume", 00:15:42.308 "block_size": 512, 00:15:42.308 "num_blocks": 131072, 00:15:42.308 "uuid": "ad746692-02ae-4b88-ba57-5ba319bb1238", 00:15:42.308 "assigned_rate_limits": { 00:15:42.308 "rw_ios_per_sec": 0, 00:15:42.308 "rw_mbytes_per_sec": 0, 00:15:42.308 "r_mbytes_per_sec": 0, 00:15:42.308 "w_mbytes_per_sec": 0 00:15:42.308 }, 00:15:42.308 "claimed": false, 00:15:42.308 "zoned": false, 00:15:42.308 "supported_io_types": { 00:15:42.308 "read": true, 00:15:42.308 "write": true, 00:15:42.308 "unmap": false, 00:15:42.308 "flush": false, 00:15:42.308 "reset": true, 00:15:42.308 "nvme_admin": false, 00:15:42.308 "nvme_io": false, 00:15:42.308 "nvme_io_md": false, 00:15:42.308 "write_zeroes": true, 00:15:42.308 "zcopy": false, 00:15:42.308 "get_zone_info": false, 00:15:42.308 "zone_management": false, 00:15:42.308 "zone_append": false, 
00:15:42.308 "compare": false, 00:15:42.308 "compare_and_write": false, 00:15:42.308 "abort": false, 00:15:42.308 "seek_hole": false, 00:15:42.308 "seek_data": false, 00:15:42.308 "copy": false, 00:15:42.308 "nvme_iov_md": false 00:15:42.308 }, 00:15:42.308 "driver_specific": { 00:15:42.308 "raid": { 00:15:42.308 "uuid": "ad746692-02ae-4b88-ba57-5ba319bb1238", 00:15:42.308 "strip_size_kb": 64, 00:15:42.308 "state": "online", 00:15:42.308 "raid_level": "raid5f", 00:15:42.308 "superblock": false, 00:15:42.308 "num_base_bdevs": 3, 00:15:42.308 "num_base_bdevs_discovered": 3, 00:15:42.308 "num_base_bdevs_operational": 3, 00:15:42.308 "base_bdevs_list": [ 00:15:42.308 { 00:15:42.308 "name": "BaseBdev1", 00:15:42.308 "uuid": "10bfea97-27fc-42e8-93e9-8586aa970830", 00:15:42.308 "is_configured": true, 00:15:42.308 "data_offset": 0, 00:15:42.308 "data_size": 65536 00:15:42.308 }, 00:15:42.308 { 00:15:42.308 "name": "BaseBdev2", 00:15:42.308 "uuid": "422d6325-d633-4972-a0b6-908eeaefddb9", 00:15:42.308 "is_configured": true, 00:15:42.308 "data_offset": 0, 00:15:42.308 "data_size": 65536 00:15:42.308 }, 00:15:42.308 { 00:15:42.308 "name": "BaseBdev3", 00:15:42.308 "uuid": "ba2e425a-097a-48a2-8c59-714b7e575aeb", 00:15:42.308 "is_configured": true, 00:15:42.308 "data_offset": 0, 00:15:42.308 "data_size": 65536 00:15:42.308 } 00:15:42.308 ] 00:15:42.308 } 00:15:42.308 } 00:15:42.308 }' 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:42.308 BaseBdev2 00:15:42.308 BaseBdev3' 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.308 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.568 [2024-10-09 03:18:25.619423] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:42.568 
03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:42.568 "name": "Existed_Raid", 00:15:42.568 "uuid": "ad746692-02ae-4b88-ba57-5ba319bb1238", 00:15:42.568 "strip_size_kb": 64, 00:15:42.568 "state": 
"online", 00:15:42.568 "raid_level": "raid5f", 00:15:42.568 "superblock": false, 00:15:42.568 "num_base_bdevs": 3, 00:15:42.568 "num_base_bdevs_discovered": 2, 00:15:42.568 "num_base_bdevs_operational": 2, 00:15:42.568 "base_bdevs_list": [ 00:15:42.568 { 00:15:42.568 "name": null, 00:15:42.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.568 "is_configured": false, 00:15:42.568 "data_offset": 0, 00:15:42.568 "data_size": 65536 00:15:42.568 }, 00:15:42.568 { 00:15:42.568 "name": "BaseBdev2", 00:15:42.568 "uuid": "422d6325-d633-4972-a0b6-908eeaefddb9", 00:15:42.568 "is_configured": true, 00:15:42.568 "data_offset": 0, 00:15:42.568 "data_size": 65536 00:15:42.568 }, 00:15:42.568 { 00:15:42.568 "name": "BaseBdev3", 00:15:42.568 "uuid": "ba2e425a-097a-48a2-8c59-714b7e575aeb", 00:15:42.568 "is_configured": true, 00:15:42.568 "data_offset": 0, 00:15:42.568 "data_size": 65536 00:15:42.568 } 00:15:42.568 ] 00:15:42.568 }' 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:42.568 03:18:25 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.135 [2024-10-09 03:18:26.201783] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:43.135 [2024-10-09 03:18:26.201920] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.135 [2024-10-09 03:18:26.304205] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.135 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.135 [2024-10-09 03:18:26.360108] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:43.135 [2024-10-09 03:18:26.360164] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.395 BaseBdev2 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.395 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:15:43.395 [ 00:15:43.395 { 00:15:43.395 "name": "BaseBdev2", 00:15:43.395 "aliases": [ 00:15:43.395 "4a10b2b5-ffe4-4e9a-a8df-012c2c4ca0e8" 00:15:43.395 ], 00:15:43.395 "product_name": "Malloc disk", 00:15:43.395 "block_size": 512, 00:15:43.395 "num_blocks": 65536, 00:15:43.395 "uuid": "4a10b2b5-ffe4-4e9a-a8df-012c2c4ca0e8", 00:15:43.395 "assigned_rate_limits": { 00:15:43.395 "rw_ios_per_sec": 0, 00:15:43.395 "rw_mbytes_per_sec": 0, 00:15:43.395 "r_mbytes_per_sec": 0, 00:15:43.395 "w_mbytes_per_sec": 0 00:15:43.395 }, 00:15:43.395 "claimed": false, 00:15:43.395 "zoned": false, 00:15:43.395 "supported_io_types": { 00:15:43.395 "read": true, 00:15:43.395 "write": true, 00:15:43.395 "unmap": true, 00:15:43.395 "flush": true, 00:15:43.395 "reset": true, 00:15:43.395 "nvme_admin": false, 00:15:43.395 "nvme_io": false, 00:15:43.395 "nvme_io_md": false, 00:15:43.395 "write_zeroes": true, 00:15:43.395 "zcopy": true, 00:15:43.396 "get_zone_info": false, 00:15:43.396 "zone_management": false, 00:15:43.396 "zone_append": false, 00:15:43.396 "compare": false, 00:15:43.396 "compare_and_write": false, 00:15:43.396 "abort": true, 00:15:43.396 "seek_hole": false, 00:15:43.396 "seek_data": false, 00:15:43.396 "copy": true, 00:15:43.396 "nvme_iov_md": false 00:15:43.396 }, 00:15:43.396 "memory_domains": [ 00:15:43.396 { 00:15:43.396 "dma_device_id": "system", 00:15:43.396 "dma_device_type": 1 00:15:43.396 }, 00:15:43.396 { 00:15:43.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.396 "dma_device_type": 2 00:15:43.396 } 00:15:43.396 ], 00:15:43.396 "driver_specific": {} 00:15:43.396 } 00:15:43.396 ] 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.396 BaseBdev3 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:43.396 [ 00:15:43.396 { 00:15:43.396 "name": "BaseBdev3", 00:15:43.396 "aliases": [ 00:15:43.396 "2e91edad-0856-452a-930b-561f713f9c64" 00:15:43.396 ], 00:15:43.396 "product_name": "Malloc disk", 00:15:43.396 "block_size": 512, 00:15:43.396 "num_blocks": 65536, 00:15:43.396 "uuid": "2e91edad-0856-452a-930b-561f713f9c64", 00:15:43.396 "assigned_rate_limits": { 00:15:43.396 "rw_ios_per_sec": 0, 00:15:43.396 "rw_mbytes_per_sec": 0, 00:15:43.396 "r_mbytes_per_sec": 0, 00:15:43.396 "w_mbytes_per_sec": 0 00:15:43.396 }, 00:15:43.396 "claimed": false, 00:15:43.396 "zoned": false, 00:15:43.396 "supported_io_types": { 00:15:43.396 "read": true, 00:15:43.396 "write": true, 00:15:43.396 "unmap": true, 00:15:43.396 "flush": true, 00:15:43.396 "reset": true, 00:15:43.396 "nvme_admin": false, 00:15:43.396 "nvme_io": false, 00:15:43.396 "nvme_io_md": false, 00:15:43.396 "write_zeroes": true, 00:15:43.396 "zcopy": true, 00:15:43.396 "get_zone_info": false, 00:15:43.396 "zone_management": false, 00:15:43.396 "zone_append": false, 00:15:43.396 "compare": false, 00:15:43.396 "compare_and_write": false, 00:15:43.396 "abort": true, 00:15:43.396 "seek_hole": false, 00:15:43.396 "seek_data": false, 00:15:43.396 "copy": true, 00:15:43.396 "nvme_iov_md": false 00:15:43.396 }, 00:15:43.396 "memory_domains": [ 00:15:43.396 { 00:15:43.396 "dma_device_id": "system", 00:15:43.396 "dma_device_type": 1 00:15:43.396 }, 00:15:43.396 { 00:15:43.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:43.396 "dma_device_type": 2 00:15:43.396 } 00:15:43.396 ], 00:15:43.396 "driver_specific": {} 00:15:43.396 } 00:15:43.396 ] 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:43.396 03:18:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.396 [2024-10-09 03:18:26.674113] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:43.396 [2024-10-09 03:18:26.674164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:43.396 [2024-10-09 03:18:26.674186] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:43.396 [2024-10-09 03:18:26.676196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.396 03:18:26 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.396 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.657 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.657 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.657 "name": "Existed_Raid", 00:15:43.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.657 "strip_size_kb": 64, 00:15:43.657 "state": "configuring", 00:15:43.657 "raid_level": "raid5f", 00:15:43.657 "superblock": false, 00:15:43.657 "num_base_bdevs": 3, 00:15:43.657 "num_base_bdevs_discovered": 2, 00:15:43.657 "num_base_bdevs_operational": 3, 00:15:43.657 "base_bdevs_list": [ 00:15:43.657 { 00:15:43.657 "name": "BaseBdev1", 00:15:43.657 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.657 "is_configured": false, 00:15:43.657 "data_offset": 0, 00:15:43.657 "data_size": 0 00:15:43.657 }, 00:15:43.657 { 00:15:43.657 "name": "BaseBdev2", 00:15:43.657 "uuid": "4a10b2b5-ffe4-4e9a-a8df-012c2c4ca0e8", 00:15:43.657 "is_configured": true, 00:15:43.657 "data_offset": 0, 00:15:43.657 "data_size": 65536 00:15:43.657 }, 00:15:43.657 { 00:15:43.657 "name": "BaseBdev3", 00:15:43.657 "uuid": "2e91edad-0856-452a-930b-561f713f9c64", 00:15:43.657 "is_configured": true, 
00:15:43.657 "data_offset": 0, 00:15:43.657 "data_size": 65536 00:15:43.657 } 00:15:43.657 ] 00:15:43.657 }' 00:15:43.657 03:18:26 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.657 03:18:26 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.925 [2024-10-09 03:18:27.121325] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.925 03:18:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.925 "name": "Existed_Raid", 00:15:43.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.925 "strip_size_kb": 64, 00:15:43.925 "state": "configuring", 00:15:43.925 "raid_level": "raid5f", 00:15:43.925 "superblock": false, 00:15:43.925 "num_base_bdevs": 3, 00:15:43.925 "num_base_bdevs_discovered": 1, 00:15:43.925 "num_base_bdevs_operational": 3, 00:15:43.925 "base_bdevs_list": [ 00:15:43.925 { 00:15:43.925 "name": "BaseBdev1", 00:15:43.925 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.925 "is_configured": false, 00:15:43.925 "data_offset": 0, 00:15:43.925 "data_size": 0 00:15:43.925 }, 00:15:43.925 { 00:15:43.925 "name": null, 00:15:43.925 "uuid": "4a10b2b5-ffe4-4e9a-a8df-012c2c4ca0e8", 00:15:43.925 "is_configured": false, 00:15:43.925 "data_offset": 0, 00:15:43.925 "data_size": 65536 00:15:43.925 }, 00:15:43.925 { 00:15:43.925 "name": "BaseBdev3", 00:15:43.925 "uuid": "2e91edad-0856-452a-930b-561f713f9c64", 00:15:43.925 "is_configured": true, 00:15:43.925 "data_offset": 0, 00:15:43.925 "data_size": 65536 00:15:43.925 } 00:15:43.925 ] 00:15:43.925 }' 00:15:43.925 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.925 03:18:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.250 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.250 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.250 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:44.250 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.509 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.509 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:44.509 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:44.509 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.509 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.509 [2024-10-09 03:18:27.635155] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:44.509 BaseBdev1 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:44.510 03:18:27 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.510 [ 00:15:44.510 { 00:15:44.510 "name": "BaseBdev1", 00:15:44.510 "aliases": [ 00:15:44.510 "aad8b33b-8345-4f9b-8947-520b8102d4b5" 00:15:44.510 ], 00:15:44.510 "product_name": "Malloc disk", 00:15:44.510 "block_size": 512, 00:15:44.510 "num_blocks": 65536, 00:15:44.510 "uuid": "aad8b33b-8345-4f9b-8947-520b8102d4b5", 00:15:44.510 "assigned_rate_limits": { 00:15:44.510 "rw_ios_per_sec": 0, 00:15:44.510 "rw_mbytes_per_sec": 0, 00:15:44.510 "r_mbytes_per_sec": 0, 00:15:44.510 "w_mbytes_per_sec": 0 00:15:44.510 }, 00:15:44.510 "claimed": true, 00:15:44.510 "claim_type": "exclusive_write", 00:15:44.510 "zoned": false, 00:15:44.510 "supported_io_types": { 00:15:44.510 "read": true, 00:15:44.510 "write": true, 00:15:44.510 "unmap": true, 00:15:44.510 "flush": true, 00:15:44.510 "reset": true, 00:15:44.510 "nvme_admin": false, 00:15:44.510 "nvme_io": false, 00:15:44.510 "nvme_io_md": false, 00:15:44.510 "write_zeroes": true, 00:15:44.510 "zcopy": true, 00:15:44.510 "get_zone_info": false, 00:15:44.510 "zone_management": false, 00:15:44.510 "zone_append": false, 00:15:44.510 
"compare": false, 00:15:44.510 "compare_and_write": false, 00:15:44.510 "abort": true, 00:15:44.510 "seek_hole": false, 00:15:44.510 "seek_data": false, 00:15:44.510 "copy": true, 00:15:44.510 "nvme_iov_md": false 00:15:44.510 }, 00:15:44.510 "memory_domains": [ 00:15:44.510 { 00:15:44.510 "dma_device_id": "system", 00:15:44.510 "dma_device_type": 1 00:15:44.510 }, 00:15:44.510 { 00:15:44.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:44.510 "dma_device_type": 2 00:15:44.510 } 00:15:44.510 ], 00:15:44.510 "driver_specific": {} 00:15:44.510 } 00:15:44.510 ] 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:44.510 03:18:27 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:44.510 "name": "Existed_Raid", 00:15:44.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.510 "strip_size_kb": 64, 00:15:44.510 "state": "configuring", 00:15:44.510 "raid_level": "raid5f", 00:15:44.510 "superblock": false, 00:15:44.510 "num_base_bdevs": 3, 00:15:44.510 "num_base_bdevs_discovered": 2, 00:15:44.510 "num_base_bdevs_operational": 3, 00:15:44.510 "base_bdevs_list": [ 00:15:44.510 { 00:15:44.510 "name": "BaseBdev1", 00:15:44.510 "uuid": "aad8b33b-8345-4f9b-8947-520b8102d4b5", 00:15:44.510 "is_configured": true, 00:15:44.510 "data_offset": 0, 00:15:44.510 "data_size": 65536 00:15:44.510 }, 00:15:44.510 { 00:15:44.510 "name": null, 00:15:44.510 "uuid": "4a10b2b5-ffe4-4e9a-a8df-012c2c4ca0e8", 00:15:44.510 "is_configured": false, 00:15:44.510 "data_offset": 0, 00:15:44.510 "data_size": 65536 00:15:44.510 }, 00:15:44.510 { 00:15:44.510 "name": "BaseBdev3", 00:15:44.510 "uuid": "2e91edad-0856-452a-930b-561f713f9c64", 00:15:44.510 "is_configured": true, 00:15:44.510 "data_offset": 0, 00:15:44.510 "data_size": 65536 00:15:44.510 } 00:15:44.510 ] 00:15:44.510 }' 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:44.510 03:18:27 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.079 03:18:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.079 [2024-10-09 03:18:28.110399] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.079 03:18:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.079 "name": "Existed_Raid", 00:15:45.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.079 "strip_size_kb": 64, 00:15:45.079 "state": "configuring", 00:15:45.079 "raid_level": "raid5f", 00:15:45.079 "superblock": false, 00:15:45.079 "num_base_bdevs": 3, 00:15:45.079 "num_base_bdevs_discovered": 1, 00:15:45.079 "num_base_bdevs_operational": 3, 00:15:45.079 "base_bdevs_list": [ 00:15:45.079 { 00:15:45.079 "name": "BaseBdev1", 00:15:45.079 "uuid": "aad8b33b-8345-4f9b-8947-520b8102d4b5", 00:15:45.079 "is_configured": true, 00:15:45.079 "data_offset": 0, 00:15:45.079 "data_size": 65536 00:15:45.079 }, 00:15:45.079 { 00:15:45.079 "name": null, 00:15:45.079 "uuid": "4a10b2b5-ffe4-4e9a-a8df-012c2c4ca0e8", 00:15:45.079 "is_configured": false, 00:15:45.079 "data_offset": 0, 00:15:45.079 "data_size": 65536 00:15:45.079 }, 00:15:45.079 { 00:15:45.079 "name": null, 
00:15:45.079 "uuid": "2e91edad-0856-452a-930b-561f713f9c64", 00:15:45.079 "is_configured": false, 00:15:45.079 "data_offset": 0, 00:15:45.079 "data_size": 65536 00:15:45.079 } 00:15:45.079 ] 00:15:45.079 }' 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.079 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.338 [2024-10-09 03:18:28.581672] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.338 03:18:28 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:45.338 "name": "Existed_Raid", 00:15:45.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:45.338 "strip_size_kb": 64, 00:15:45.338 "state": "configuring", 00:15:45.338 "raid_level": "raid5f", 00:15:45.338 "superblock": false, 00:15:45.338 "num_base_bdevs": 3, 00:15:45.338 "num_base_bdevs_discovered": 2, 00:15:45.338 "num_base_bdevs_operational": 3, 00:15:45.338 "base_bdevs_list": [ 00:15:45.338 { 
00:15:45.338 "name": "BaseBdev1", 00:15:45.338 "uuid": "aad8b33b-8345-4f9b-8947-520b8102d4b5", 00:15:45.338 "is_configured": true, 00:15:45.338 "data_offset": 0, 00:15:45.338 "data_size": 65536 00:15:45.338 }, 00:15:45.338 { 00:15:45.338 "name": null, 00:15:45.338 "uuid": "4a10b2b5-ffe4-4e9a-a8df-012c2c4ca0e8", 00:15:45.338 "is_configured": false, 00:15:45.338 "data_offset": 0, 00:15:45.338 "data_size": 65536 00:15:45.338 }, 00:15:45.338 { 00:15:45.338 "name": "BaseBdev3", 00:15:45.338 "uuid": "2e91edad-0856-452a-930b-561f713f9c64", 00:15:45.338 "is_configured": true, 00:15:45.338 "data_offset": 0, 00:15:45.338 "data_size": 65536 00:15:45.338 } 00:15:45.338 ] 00:15:45.338 }' 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:45.338 03:18:28 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:45.907 [2024-10-09 03:18:29.080926] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.907 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.167 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.167 03:18:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.167 "name": "Existed_Raid", 00:15:46.167 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.167 "strip_size_kb": 64, 00:15:46.167 "state": "configuring", 00:15:46.167 "raid_level": "raid5f", 00:15:46.167 "superblock": false, 00:15:46.167 "num_base_bdevs": 3, 00:15:46.167 "num_base_bdevs_discovered": 1, 00:15:46.167 "num_base_bdevs_operational": 3, 00:15:46.167 "base_bdevs_list": [ 00:15:46.167 { 00:15:46.167 "name": null, 00:15:46.167 "uuid": "aad8b33b-8345-4f9b-8947-520b8102d4b5", 00:15:46.167 "is_configured": false, 00:15:46.167 "data_offset": 0, 00:15:46.167 "data_size": 65536 00:15:46.167 }, 00:15:46.167 { 00:15:46.167 "name": null, 00:15:46.167 "uuid": "4a10b2b5-ffe4-4e9a-a8df-012c2c4ca0e8", 00:15:46.167 "is_configured": false, 00:15:46.167 "data_offset": 0, 00:15:46.167 "data_size": 65536 00:15:46.167 }, 00:15:46.167 { 00:15:46.167 "name": "BaseBdev3", 00:15:46.167 "uuid": "2e91edad-0856-452a-930b-561f713f9c64", 00:15:46.167 "is_configured": true, 00:15:46.167 "data_offset": 0, 00:15:46.167 "data_size": 65536 00:15:46.167 } 00:15:46.167 ] 00:15:46.167 }' 00:15:46.167 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.167 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.426 [2024-10-09 03:18:29.662411] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.426 03:18:29 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:46.426 "name": "Existed_Raid", 00:15:46.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:46.426 "strip_size_kb": 64, 00:15:46.426 "state": "configuring", 00:15:46.426 "raid_level": "raid5f", 00:15:46.426 "superblock": false, 00:15:46.426 "num_base_bdevs": 3, 00:15:46.426 "num_base_bdevs_discovered": 2, 00:15:46.426 "num_base_bdevs_operational": 3, 00:15:46.426 "base_bdevs_list": [ 00:15:46.426 { 00:15:46.426 "name": null, 00:15:46.426 "uuid": "aad8b33b-8345-4f9b-8947-520b8102d4b5", 00:15:46.426 "is_configured": false, 00:15:46.426 "data_offset": 0, 00:15:46.426 "data_size": 65536 00:15:46.426 }, 00:15:46.426 { 00:15:46.426 "name": "BaseBdev2", 00:15:46.426 "uuid": "4a10b2b5-ffe4-4e9a-a8df-012c2c4ca0e8", 00:15:46.426 "is_configured": true, 00:15:46.426 "data_offset": 0, 00:15:46.426 "data_size": 65536 00:15:46.426 }, 00:15:46.426 { 00:15:46.426 "name": "BaseBdev3", 00:15:46.426 "uuid": "2e91edad-0856-452a-930b-561f713f9c64", 00:15:46.426 "is_configured": true, 00:15:46.426 "data_offset": 0, 00:15:46.426 "data_size": 65536 00:15:46.426 } 00:15:46.426 ] 00:15:46.426 }' 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:46.426 03:18:29 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.992 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.992 03:18:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:46.992 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.992 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u aad8b33b-8345-4f9b-8947-520b8102d4b5 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.993 [2024-10-09 03:18:30.213863] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:46.993 [2024-10-09 03:18:30.213920] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:46.993 [2024-10-09 03:18:30.213932] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:15:46.993 [2024-10-09 03:18:30.214206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:15:46.993 [2024-10-09 03:18:30.219048] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:46.993 [2024-10-09 03:18:30.219071] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:46.993 [2024-10-09 03:18:30.219327] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:46.993 NewBaseBdev 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.993 03:18:30 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.993 [ 00:15:46.993 { 00:15:46.993 "name": "NewBaseBdev", 00:15:46.993 "aliases": [ 00:15:46.993 "aad8b33b-8345-4f9b-8947-520b8102d4b5" 00:15:46.993 ], 00:15:46.993 "product_name": "Malloc disk", 00:15:46.993 "block_size": 512, 00:15:46.993 "num_blocks": 65536, 00:15:46.993 "uuid": "aad8b33b-8345-4f9b-8947-520b8102d4b5", 00:15:46.993 "assigned_rate_limits": { 00:15:46.993 "rw_ios_per_sec": 0, 00:15:46.993 "rw_mbytes_per_sec": 0, 00:15:46.993 "r_mbytes_per_sec": 0, 00:15:46.993 "w_mbytes_per_sec": 0 00:15:46.993 }, 00:15:46.993 "claimed": true, 00:15:46.993 "claim_type": "exclusive_write", 00:15:46.993 "zoned": false, 00:15:46.993 "supported_io_types": { 00:15:46.993 "read": true, 00:15:46.993 "write": true, 00:15:46.993 "unmap": true, 00:15:46.993 "flush": true, 00:15:46.993 "reset": true, 00:15:46.993 "nvme_admin": false, 00:15:46.993 "nvme_io": false, 00:15:46.993 "nvme_io_md": false, 00:15:46.993 "write_zeroes": true, 00:15:46.993 "zcopy": true, 00:15:46.993 "get_zone_info": false, 00:15:46.993 "zone_management": false, 00:15:46.993 "zone_append": false, 00:15:46.993 "compare": false, 00:15:46.993 "compare_and_write": false, 00:15:46.993 "abort": true, 00:15:46.993 "seek_hole": false, 00:15:46.993 "seek_data": false, 00:15:46.993 "copy": true, 00:15:46.993 "nvme_iov_md": false 00:15:46.993 }, 00:15:46.993 "memory_domains": [ 00:15:46.993 { 00:15:46.993 "dma_device_id": "system", 00:15:46.993 "dma_device_type": 1 00:15:46.993 }, 00:15:46.993 { 00:15:46.993 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.993 "dma_device_type": 2 00:15:46.993 } 00:15:46.993 ], 00:15:46.993 "driver_specific": {} 00:15:46.993 } 00:15:46.993 ] 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:15:46.993 03:18:30 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.993 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.252 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:47.252 "name": "Existed_Raid", 00:15:47.252 "uuid": "471cacba-14a8-4c94-b5e3-1da5c19559da", 00:15:47.252 "strip_size_kb": 64, 00:15:47.252 "state": "online", 
00:15:47.252 "raid_level": "raid5f", 00:15:47.252 "superblock": false, 00:15:47.252 "num_base_bdevs": 3, 00:15:47.252 "num_base_bdevs_discovered": 3, 00:15:47.252 "num_base_bdevs_operational": 3, 00:15:47.252 "base_bdevs_list": [ 00:15:47.252 { 00:15:47.252 "name": "NewBaseBdev", 00:15:47.252 "uuid": "aad8b33b-8345-4f9b-8947-520b8102d4b5", 00:15:47.252 "is_configured": true, 00:15:47.252 "data_offset": 0, 00:15:47.252 "data_size": 65536 00:15:47.252 }, 00:15:47.252 { 00:15:47.252 "name": "BaseBdev2", 00:15:47.252 "uuid": "4a10b2b5-ffe4-4e9a-a8df-012c2c4ca0e8", 00:15:47.252 "is_configured": true, 00:15:47.252 "data_offset": 0, 00:15:47.252 "data_size": 65536 00:15:47.252 }, 00:15:47.252 { 00:15:47.252 "name": "BaseBdev3", 00:15:47.252 "uuid": "2e91edad-0856-452a-930b-561f713f9c64", 00:15:47.252 "is_configured": true, 00:15:47.252 "data_offset": 0, 00:15:47.252 "data_size": 65536 00:15:47.252 } 00:15:47.252 ] 00:15:47.252 }' 00:15:47.252 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:47.252 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.510 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:47.510 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:47.510 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:47.510 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:47.510 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:15:47.510 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:47.510 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:47.510 03:18:30 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.510 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.510 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:47.510 [2024-10-09 03:18:30.685664] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.510 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.510 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:47.510 "name": "Existed_Raid", 00:15:47.510 "aliases": [ 00:15:47.510 "471cacba-14a8-4c94-b5e3-1da5c19559da" 00:15:47.510 ], 00:15:47.510 "product_name": "Raid Volume", 00:15:47.510 "block_size": 512, 00:15:47.510 "num_blocks": 131072, 00:15:47.510 "uuid": "471cacba-14a8-4c94-b5e3-1da5c19559da", 00:15:47.510 "assigned_rate_limits": { 00:15:47.510 "rw_ios_per_sec": 0, 00:15:47.510 "rw_mbytes_per_sec": 0, 00:15:47.510 "r_mbytes_per_sec": 0, 00:15:47.510 "w_mbytes_per_sec": 0 00:15:47.510 }, 00:15:47.510 "claimed": false, 00:15:47.510 "zoned": false, 00:15:47.511 "supported_io_types": { 00:15:47.511 "read": true, 00:15:47.511 "write": true, 00:15:47.511 "unmap": false, 00:15:47.511 "flush": false, 00:15:47.511 "reset": true, 00:15:47.511 "nvme_admin": false, 00:15:47.511 "nvme_io": false, 00:15:47.511 "nvme_io_md": false, 00:15:47.511 "write_zeroes": true, 00:15:47.511 "zcopy": false, 00:15:47.511 "get_zone_info": false, 00:15:47.511 "zone_management": false, 00:15:47.511 "zone_append": false, 00:15:47.511 "compare": false, 00:15:47.511 "compare_and_write": false, 00:15:47.511 "abort": false, 00:15:47.511 "seek_hole": false, 00:15:47.511 "seek_data": false, 00:15:47.511 "copy": false, 00:15:47.511 "nvme_iov_md": false 00:15:47.511 }, 00:15:47.511 "driver_specific": { 00:15:47.511 "raid": { 00:15:47.511 "uuid": 
"471cacba-14a8-4c94-b5e3-1da5c19559da", 00:15:47.511 "strip_size_kb": 64, 00:15:47.511 "state": "online", 00:15:47.511 "raid_level": "raid5f", 00:15:47.511 "superblock": false, 00:15:47.511 "num_base_bdevs": 3, 00:15:47.511 "num_base_bdevs_discovered": 3, 00:15:47.511 "num_base_bdevs_operational": 3, 00:15:47.511 "base_bdevs_list": [ 00:15:47.511 { 00:15:47.511 "name": "NewBaseBdev", 00:15:47.511 "uuid": "aad8b33b-8345-4f9b-8947-520b8102d4b5", 00:15:47.511 "is_configured": true, 00:15:47.511 "data_offset": 0, 00:15:47.511 "data_size": 65536 00:15:47.511 }, 00:15:47.511 { 00:15:47.511 "name": "BaseBdev2", 00:15:47.511 "uuid": "4a10b2b5-ffe4-4e9a-a8df-012c2c4ca0e8", 00:15:47.511 "is_configured": true, 00:15:47.511 "data_offset": 0, 00:15:47.511 "data_size": 65536 00:15:47.511 }, 00:15:47.511 { 00:15:47.511 "name": "BaseBdev3", 00:15:47.511 "uuid": "2e91edad-0856-452a-930b-561f713f9c64", 00:15:47.511 "is_configured": true, 00:15:47.511 "data_offset": 0, 00:15:47.511 "data_size": 65536 00:15:47.511 } 00:15:47.511 ] 00:15:47.511 } 00:15:47.511 } 00:15:47.511 }' 00:15:47.511 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.511 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:47.511 BaseBdev2 00:15:47.511 BaseBdev3' 00:15:47.511 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.769 03:18:30 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.769 [2024-10-09 03:18:30.937043] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:47.769 [2024-10-09 03:18:30.937066] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:47.769 [2024-10-09 03:18:30.937135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:47.769 [2024-10-09 03:18:30.937426] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:47.769 [2024-10-09 03:18:30.937446] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.769 03:18:30 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80152 00:15:47.770 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 80152 ']' 00:15:47.770 03:18:30 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@954 -- # kill -0 80152 00:15:47.770 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:15:47.770 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:47.770 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80152 00:15:47.770 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:47.770 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:47.770 killing process with pid 80152 00:15:47.770 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80152' 00:15:47.770 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 80152 00:15:47.770 [2024-10-09 03:18:30.985118] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:47.770 03:18:30 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 80152 00:15:48.029 [2024-10-09 03:18:31.303138] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:49.408 03:18:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:15:49.408 00:15:49.408 real 0m10.997s 00:15:49.408 user 0m17.199s 00:15:49.408 sys 0m2.034s 00:15:49.408 03:18:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:49.408 03:18:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:49.408 ************************************ 00:15:49.408 END TEST raid5f_state_function_test 00:15:49.408 ************************************ 00:15:49.408 03:18:32 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:15:49.408 03:18:32 bdev_raid -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:15:49.408 03:18:32 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:49.408 03:18:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:49.668 ************************************ 00:15:49.668 START TEST raid5f_state_function_test_sb 00:15:49.668 ************************************ 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 3 true 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:15:49.668 03:18:32 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:15:49.668 Process raid pid: 80779 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80779 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80779' 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80779 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 80779 ']' 00:15:49.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:49.668 03:18:32 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:49.668 [2024-10-09 03:18:32.812988] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:15:49.668 [2024-10-09 03:18:32.813188] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:49.927 [2024-10-09 03:18:32.982170] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.928 [2024-10-09 03:18:33.229866] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:15:50.187 [2024-10-09 03:18:33.464046] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.187 [2024-10-09 03:18:33.464090] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.447 [2024-10-09 03:18:33.655051] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.447 [2024-10-09 03:18:33.655190] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.447 [2024-10-09 03:18:33.655222] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.447 [2024-10-09 03:18:33.655248] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.447 [2024-10-09 03:18:33.655265] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:15:50.447 [2024-10-09 03:18:33.655286] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.447 03:18:33 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:50.447 "name": "Existed_Raid", 00:15:50.447 "uuid": "eb11c7e0-451c-470a-b0d1-543866a80e56", 00:15:50.447 "strip_size_kb": 64, 00:15:50.447 "state": "configuring", 00:15:50.447 "raid_level": "raid5f", 00:15:50.447 "superblock": true, 00:15:50.447 "num_base_bdevs": 3, 00:15:50.447 "num_base_bdevs_discovered": 0, 00:15:50.447 "num_base_bdevs_operational": 3, 00:15:50.447 "base_bdevs_list": [ 00:15:50.447 { 00:15:50.447 "name": "BaseBdev1", 00:15:50.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.447 "is_configured": false, 00:15:50.447 "data_offset": 0, 00:15:50.447 "data_size": 0 00:15:50.447 }, 00:15:50.447 { 00:15:50.447 "name": "BaseBdev2", 00:15:50.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.447 "is_configured": false, 00:15:50.447 "data_offset": 0, 00:15:50.447 "data_size": 0 00:15:50.447 }, 00:15:50.447 { 00:15:50.447 "name": "BaseBdev3", 00:15:50.447 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.447 "is_configured": false, 00:15:50.447 "data_offset": 0, 00:15:50.447 "data_size": 0 00:15:50.447 } 00:15:50.447 ] 00:15:50.447 }' 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:50.447 03:18:33 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.017 [2024-10-09 03:18:34.034522] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.017 
[2024-10-09 03:18:34.034609] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.017 [2024-10-09 03:18:34.046527] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:51.017 [2024-10-09 03:18:34.046608] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:51.017 [2024-10-09 03:18:34.046634] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.017 [2024-10-09 03:18:34.046656] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.017 [2024-10-09 03:18:34.046672] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:51.017 [2024-10-09 03:18:34.046692] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.017 [2024-10-09 03:18:34.133958] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.017 BaseBdev1 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.017 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.017 [ 00:15:51.017 { 00:15:51.017 "name": "BaseBdev1", 00:15:51.017 "aliases": [ 00:15:51.017 "2beb5196-114a-458e-a4c7-899a1c247543" 00:15:51.017 ], 00:15:51.017 "product_name": "Malloc disk", 00:15:51.017 "block_size": 512, 00:15:51.017 
"num_blocks": 65536, 00:15:51.017 "uuid": "2beb5196-114a-458e-a4c7-899a1c247543", 00:15:51.017 "assigned_rate_limits": { 00:15:51.017 "rw_ios_per_sec": 0, 00:15:51.017 "rw_mbytes_per_sec": 0, 00:15:51.017 "r_mbytes_per_sec": 0, 00:15:51.017 "w_mbytes_per_sec": 0 00:15:51.017 }, 00:15:51.017 "claimed": true, 00:15:51.017 "claim_type": "exclusive_write", 00:15:51.017 "zoned": false, 00:15:51.017 "supported_io_types": { 00:15:51.017 "read": true, 00:15:51.017 "write": true, 00:15:51.017 "unmap": true, 00:15:51.017 "flush": true, 00:15:51.017 "reset": true, 00:15:51.017 "nvme_admin": false, 00:15:51.017 "nvme_io": false, 00:15:51.017 "nvme_io_md": false, 00:15:51.017 "write_zeroes": true, 00:15:51.017 "zcopy": true, 00:15:51.017 "get_zone_info": false, 00:15:51.017 "zone_management": false, 00:15:51.017 "zone_append": false, 00:15:51.017 "compare": false, 00:15:51.017 "compare_and_write": false, 00:15:51.017 "abort": true, 00:15:51.017 "seek_hole": false, 00:15:51.017 "seek_data": false, 00:15:51.018 "copy": true, 00:15:51.018 "nvme_iov_md": false 00:15:51.018 }, 00:15:51.018 "memory_domains": [ 00:15:51.018 { 00:15:51.018 "dma_device_id": "system", 00:15:51.018 "dma_device_type": 1 00:15:51.018 }, 00:15:51.018 { 00:15:51.018 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.018 "dma_device_type": 2 00:15:51.018 } 00:15:51.018 ], 00:15:51.018 "driver_specific": {} 00:15:51.018 } 00:15:51.018 ] 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.018 "name": "Existed_Raid", 00:15:51.018 "uuid": "ff70575b-3d53-41d9-b7b9-6fc7f8ceaa00", 00:15:51.018 "strip_size_kb": 64, 00:15:51.018 "state": "configuring", 00:15:51.018 "raid_level": "raid5f", 00:15:51.018 "superblock": true, 00:15:51.018 "num_base_bdevs": 3, 00:15:51.018 "num_base_bdevs_discovered": 1, 00:15:51.018 "num_base_bdevs_operational": 3, 00:15:51.018 "base_bdevs_list": [ 00:15:51.018 { 00:15:51.018 
"name": "BaseBdev1", 00:15:51.018 "uuid": "2beb5196-114a-458e-a4c7-899a1c247543", 00:15:51.018 "is_configured": true, 00:15:51.018 "data_offset": 2048, 00:15:51.018 "data_size": 63488 00:15:51.018 }, 00:15:51.018 { 00:15:51.018 "name": "BaseBdev2", 00:15:51.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.018 "is_configured": false, 00:15:51.018 "data_offset": 0, 00:15:51.018 "data_size": 0 00:15:51.018 }, 00:15:51.018 { 00:15:51.018 "name": "BaseBdev3", 00:15:51.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.018 "is_configured": false, 00:15:51.018 "data_offset": 0, 00:15:51.018 "data_size": 0 00:15:51.018 } 00:15:51.018 ] 00:15:51.018 }' 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.018 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.588 [2024-10-09 03:18:34.641106] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:51.588 [2024-10-09 03:18:34.641215] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:15:51.588 [2024-10-09 03:18:34.653172] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.588 [2024-10-09 03:18:34.655237] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:51.588 [2024-10-09 03:18:34.655284] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:51.588 [2024-10-09 03:18:34.655294] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:51.588 [2024-10-09 03:18:34.655303] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:51.588 "name": "Existed_Raid", 00:15:51.588 "uuid": "6d56271d-023c-46ee-8097-b960fefe192b", 00:15:51.588 "strip_size_kb": 64, 00:15:51.588 "state": "configuring", 00:15:51.588 "raid_level": "raid5f", 00:15:51.588 "superblock": true, 00:15:51.588 "num_base_bdevs": 3, 00:15:51.588 "num_base_bdevs_discovered": 1, 00:15:51.588 "num_base_bdevs_operational": 3, 00:15:51.588 "base_bdevs_list": [ 00:15:51.588 { 00:15:51.588 "name": "BaseBdev1", 00:15:51.588 "uuid": "2beb5196-114a-458e-a4c7-899a1c247543", 00:15:51.588 "is_configured": true, 00:15:51.588 "data_offset": 2048, 00:15:51.588 "data_size": 63488 00:15:51.588 }, 00:15:51.588 { 00:15:51.588 "name": "BaseBdev2", 00:15:51.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.588 "is_configured": false, 00:15:51.588 "data_offset": 0, 00:15:51.588 "data_size": 0 00:15:51.588 }, 00:15:51.588 { 00:15:51.588 "name": "BaseBdev3", 00:15:51.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.588 "is_configured": false, 00:15:51.588 "data_offset": 0, 00:15:51.588 "data_size": 
0 00:15:51.588 } 00:15:51.588 ] 00:15:51.588 }' 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:51.588 03:18:34 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.848 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:51.848 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.848 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:51.848 [2024-10-09 03:18:35.143047] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:51.848 BaseBdev2 00:15:51.848 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:51.848 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:15:51.848 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:51.848 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:51.848 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:51.848 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:51.848 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:51.848 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:51.848 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:51.848 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.107 03:18:35 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.107 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:52.107 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.107 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.107 [ 00:15:52.107 { 00:15:52.107 "name": "BaseBdev2", 00:15:52.107 "aliases": [ 00:15:52.107 "988feb0a-c908-4d78-8c3c-6d8040d3483e" 00:15:52.107 ], 00:15:52.107 "product_name": "Malloc disk", 00:15:52.107 "block_size": 512, 00:15:52.107 "num_blocks": 65536, 00:15:52.107 "uuid": "988feb0a-c908-4d78-8c3c-6d8040d3483e", 00:15:52.107 "assigned_rate_limits": { 00:15:52.107 "rw_ios_per_sec": 0, 00:15:52.107 "rw_mbytes_per_sec": 0, 00:15:52.107 "r_mbytes_per_sec": 0, 00:15:52.107 "w_mbytes_per_sec": 0 00:15:52.107 }, 00:15:52.107 "claimed": true, 00:15:52.107 "claim_type": "exclusive_write", 00:15:52.107 "zoned": false, 00:15:52.108 "supported_io_types": { 00:15:52.108 "read": true, 00:15:52.108 "write": true, 00:15:52.108 "unmap": true, 00:15:52.108 "flush": true, 00:15:52.108 "reset": true, 00:15:52.108 "nvme_admin": false, 00:15:52.108 "nvme_io": false, 00:15:52.108 "nvme_io_md": false, 00:15:52.108 "write_zeroes": true, 00:15:52.108 "zcopy": true, 00:15:52.108 "get_zone_info": false, 00:15:52.108 "zone_management": false, 00:15:52.108 "zone_append": false, 00:15:52.108 "compare": false, 00:15:52.108 "compare_and_write": false, 00:15:52.108 "abort": true, 00:15:52.108 "seek_hole": false, 00:15:52.108 "seek_data": false, 00:15:52.108 "copy": true, 00:15:52.108 "nvme_iov_md": false 00:15:52.108 }, 00:15:52.108 "memory_domains": [ 00:15:52.108 { 00:15:52.108 "dma_device_id": "system", 00:15:52.108 "dma_device_type": 1 00:15:52.108 }, 00:15:52.108 { 00:15:52.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.108 "dma_device_type": 2 00:15:52.108 } 
00:15:52.108 ], 00:15:52.108 "driver_specific": {} 00:15:52.108 } 00:15:52.108 ] 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.108 "name": "Existed_Raid", 00:15:52.108 "uuid": "6d56271d-023c-46ee-8097-b960fefe192b", 00:15:52.108 "strip_size_kb": 64, 00:15:52.108 "state": "configuring", 00:15:52.108 "raid_level": "raid5f", 00:15:52.108 "superblock": true, 00:15:52.108 "num_base_bdevs": 3, 00:15:52.108 "num_base_bdevs_discovered": 2, 00:15:52.108 "num_base_bdevs_operational": 3, 00:15:52.108 "base_bdevs_list": [ 00:15:52.108 { 00:15:52.108 "name": "BaseBdev1", 00:15:52.108 "uuid": "2beb5196-114a-458e-a4c7-899a1c247543", 00:15:52.108 "is_configured": true, 00:15:52.108 "data_offset": 2048, 00:15:52.108 "data_size": 63488 00:15:52.108 }, 00:15:52.108 { 00:15:52.108 "name": "BaseBdev2", 00:15:52.108 "uuid": "988feb0a-c908-4d78-8c3c-6d8040d3483e", 00:15:52.108 "is_configured": true, 00:15:52.108 "data_offset": 2048, 00:15:52.108 "data_size": 63488 00:15:52.108 }, 00:15:52.108 { 00:15:52.108 "name": "BaseBdev3", 00:15:52.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:52.108 "is_configured": false, 00:15:52.108 "data_offset": 0, 00:15:52.108 "data_size": 0 00:15:52.108 } 00:15:52.108 ] 00:15:52.108 }' 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.108 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.367 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:52.367 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:52.368 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.627 [2024-10-09 03:18:35.674620] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:52.627 [2024-10-09 03:18:35.675033] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:15:52.627 [2024-10-09 03:18:35.675096] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:52.627 [2024-10-09 03:18:35.675407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:52.627 BaseBdev3 00:15:52.627 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.627 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:15:52.627 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:52.627 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:52.627 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:52.627 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:52.627 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:52.627 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:52.627 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.627 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.627 [2024-10-09 03:18:35.680921] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:15:52.627 [2024-10-09 03:18:35.680978] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:15:52.627 [2024-10-09 03:18:35.681183] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:52.627 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.627 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:52.627 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.627 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.627 [ 00:15:52.627 { 00:15:52.627 "name": "BaseBdev3", 00:15:52.627 "aliases": [ 00:15:52.628 "fac08a5f-6265-4880-9917-47ac8bf0e8d1" 00:15:52.628 ], 00:15:52.628 "product_name": "Malloc disk", 00:15:52.628 "block_size": 512, 00:15:52.628 "num_blocks": 65536, 00:15:52.628 "uuid": "fac08a5f-6265-4880-9917-47ac8bf0e8d1", 00:15:52.628 "assigned_rate_limits": { 00:15:52.628 "rw_ios_per_sec": 0, 00:15:52.628 "rw_mbytes_per_sec": 0, 00:15:52.628 "r_mbytes_per_sec": 0, 00:15:52.628 "w_mbytes_per_sec": 0 00:15:52.628 }, 00:15:52.628 "claimed": true, 00:15:52.628 "claim_type": "exclusive_write", 00:15:52.628 "zoned": false, 00:15:52.628 "supported_io_types": { 00:15:52.628 "read": true, 00:15:52.628 "write": true, 00:15:52.628 "unmap": true, 00:15:52.628 "flush": true, 00:15:52.628 "reset": true, 00:15:52.628 "nvme_admin": false, 00:15:52.628 "nvme_io": false, 00:15:52.628 "nvme_io_md": false, 00:15:52.628 "write_zeroes": true, 00:15:52.628 "zcopy": true, 00:15:52.628 "get_zone_info": false, 00:15:52.628 "zone_management": false, 00:15:52.628 "zone_append": false, 00:15:52.628 "compare": false, 00:15:52.628 "compare_and_write": false, 00:15:52.628 "abort": true, 00:15:52.628 "seek_hole": false, 00:15:52.628 "seek_data": false, 00:15:52.628 "copy": true, 00:15:52.628 
"nvme_iov_md": false 00:15:52.628 }, 00:15:52.628 "memory_domains": [ 00:15:52.628 { 00:15:52.628 "dma_device_id": "system", 00:15:52.628 "dma_device_type": 1 00:15:52.628 }, 00:15:52.628 { 00:15:52.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:52.628 "dma_device_type": 2 00:15:52.628 } 00:15:52.628 ], 00:15:52.628 "driver_specific": {} 00:15:52.628 } 00:15:52.628 ] 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:52.628 "name": "Existed_Raid", 00:15:52.628 "uuid": "6d56271d-023c-46ee-8097-b960fefe192b", 00:15:52.628 "strip_size_kb": 64, 00:15:52.628 "state": "online", 00:15:52.628 "raid_level": "raid5f", 00:15:52.628 "superblock": true, 00:15:52.628 "num_base_bdevs": 3, 00:15:52.628 "num_base_bdevs_discovered": 3, 00:15:52.628 "num_base_bdevs_operational": 3, 00:15:52.628 "base_bdevs_list": [ 00:15:52.628 { 00:15:52.628 "name": "BaseBdev1", 00:15:52.628 "uuid": "2beb5196-114a-458e-a4c7-899a1c247543", 00:15:52.628 "is_configured": true, 00:15:52.628 "data_offset": 2048, 00:15:52.628 "data_size": 63488 00:15:52.628 }, 00:15:52.628 { 00:15:52.628 "name": "BaseBdev2", 00:15:52.628 "uuid": "988feb0a-c908-4d78-8c3c-6d8040d3483e", 00:15:52.628 "is_configured": true, 00:15:52.628 "data_offset": 2048, 00:15:52.628 "data_size": 63488 00:15:52.628 }, 00:15:52.628 { 00:15:52.628 "name": "BaseBdev3", 00:15:52.628 "uuid": "fac08a5f-6265-4880-9917-47ac8bf0e8d1", 00:15:52.628 "is_configured": true, 00:15:52.628 "data_offset": 2048, 00:15:52.628 "data_size": 63488 00:15:52.628 } 00:15:52.628 ] 00:15:52.628 }' 00:15:52.628 03:18:35 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:52.628 03:18:35 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.198 [2024-10-09 03:18:36.207404] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:53.198 "name": "Existed_Raid", 00:15:53.198 "aliases": [ 00:15:53.198 "6d56271d-023c-46ee-8097-b960fefe192b" 00:15:53.198 ], 00:15:53.198 "product_name": "Raid Volume", 00:15:53.198 "block_size": 512, 00:15:53.198 "num_blocks": 126976, 00:15:53.198 "uuid": "6d56271d-023c-46ee-8097-b960fefe192b", 00:15:53.198 "assigned_rate_limits": { 00:15:53.198 "rw_ios_per_sec": 0, 00:15:53.198 
"rw_mbytes_per_sec": 0, 00:15:53.198 "r_mbytes_per_sec": 0, 00:15:53.198 "w_mbytes_per_sec": 0 00:15:53.198 }, 00:15:53.198 "claimed": false, 00:15:53.198 "zoned": false, 00:15:53.198 "supported_io_types": { 00:15:53.198 "read": true, 00:15:53.198 "write": true, 00:15:53.198 "unmap": false, 00:15:53.198 "flush": false, 00:15:53.198 "reset": true, 00:15:53.198 "nvme_admin": false, 00:15:53.198 "nvme_io": false, 00:15:53.198 "nvme_io_md": false, 00:15:53.198 "write_zeroes": true, 00:15:53.198 "zcopy": false, 00:15:53.198 "get_zone_info": false, 00:15:53.198 "zone_management": false, 00:15:53.198 "zone_append": false, 00:15:53.198 "compare": false, 00:15:53.198 "compare_and_write": false, 00:15:53.198 "abort": false, 00:15:53.198 "seek_hole": false, 00:15:53.198 "seek_data": false, 00:15:53.198 "copy": false, 00:15:53.198 "nvme_iov_md": false 00:15:53.198 }, 00:15:53.198 "driver_specific": { 00:15:53.198 "raid": { 00:15:53.198 "uuid": "6d56271d-023c-46ee-8097-b960fefe192b", 00:15:53.198 "strip_size_kb": 64, 00:15:53.198 "state": "online", 00:15:53.198 "raid_level": "raid5f", 00:15:53.198 "superblock": true, 00:15:53.198 "num_base_bdevs": 3, 00:15:53.198 "num_base_bdevs_discovered": 3, 00:15:53.198 "num_base_bdevs_operational": 3, 00:15:53.198 "base_bdevs_list": [ 00:15:53.198 { 00:15:53.198 "name": "BaseBdev1", 00:15:53.198 "uuid": "2beb5196-114a-458e-a4c7-899a1c247543", 00:15:53.198 "is_configured": true, 00:15:53.198 "data_offset": 2048, 00:15:53.198 "data_size": 63488 00:15:53.198 }, 00:15:53.198 { 00:15:53.198 "name": "BaseBdev2", 00:15:53.198 "uuid": "988feb0a-c908-4d78-8c3c-6d8040d3483e", 00:15:53.198 "is_configured": true, 00:15:53.198 "data_offset": 2048, 00:15:53.198 "data_size": 63488 00:15:53.198 }, 00:15:53.198 { 00:15:53.198 "name": "BaseBdev3", 00:15:53.198 "uuid": "fac08a5f-6265-4880-9917-47ac8bf0e8d1", 00:15:53.198 "is_configured": true, 00:15:53.198 "data_offset": 2048, 00:15:53.198 "data_size": 63488 00:15:53.198 } 00:15:53.198 ] 00:15:53.198 } 
00:15:53.198 } 00:15:53.198 }' 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:15:53.198 BaseBdev2 00:15:53.198 BaseBdev3' 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.198 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.198 [2024-10-09 
03:18:36.486756] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:53.460 03:18:36 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:53.460 "name": "Existed_Raid", 00:15:53.460 "uuid": "6d56271d-023c-46ee-8097-b960fefe192b", 00:15:53.460 "strip_size_kb": 64, 00:15:53.460 "state": "online", 00:15:53.460 "raid_level": "raid5f", 00:15:53.460 "superblock": true, 00:15:53.460 "num_base_bdevs": 3, 00:15:53.460 "num_base_bdevs_discovered": 2, 00:15:53.460 "num_base_bdevs_operational": 2, 00:15:53.460 "base_bdevs_list": [ 00:15:53.460 { 00:15:53.460 "name": null, 00:15:53.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.460 "is_configured": false, 00:15:53.460 "data_offset": 0, 00:15:53.460 "data_size": 63488 00:15:53.460 }, 00:15:53.460 { 00:15:53.460 "name": "BaseBdev2", 00:15:53.460 "uuid": "988feb0a-c908-4d78-8c3c-6d8040d3483e", 00:15:53.460 "is_configured": true, 00:15:53.460 "data_offset": 2048, 00:15:53.460 "data_size": 63488 00:15:53.460 }, 00:15:53.460 { 00:15:53.460 "name": "BaseBdev3", 00:15:53.460 "uuid": "fac08a5f-6265-4880-9917-47ac8bf0e8d1", 00:15:53.460 "is_configured": true, 00:15:53.460 "data_offset": 2048, 00:15:53.460 "data_size": 63488 00:15:53.460 } 00:15:53.460 ] 00:15:53.460 }' 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:53.460 03:18:36 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.048 [2024-10-09 03:18:37.090270] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.048 [2024-10-09 03:18:37.090463] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:54.048 [2024-10-09 03:18:37.186933] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:54.048 03:18:37 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.048 [2024-10-09 03:18:37.246839] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:54.048 [2024-10-09 03:18:37.246908] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:15:54.048 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:15:54.307 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.308 
03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.308 BaseBdev2 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:54.308 03:18:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.308 [ 00:15:54.308 { 00:15:54.308 "name": "BaseBdev2", 00:15:54.308 "aliases": [ 00:15:54.308 "e0f35c20-7c69-474b-a2ed-cb6f76a8a8d9" 00:15:54.308 ], 00:15:54.308 "product_name": "Malloc disk", 00:15:54.308 "block_size": 512, 00:15:54.308 "num_blocks": 65536, 00:15:54.308 "uuid": "e0f35c20-7c69-474b-a2ed-cb6f76a8a8d9", 00:15:54.308 "assigned_rate_limits": { 00:15:54.308 "rw_ios_per_sec": 0, 00:15:54.308 "rw_mbytes_per_sec": 0, 00:15:54.308 "r_mbytes_per_sec": 0, 00:15:54.308 "w_mbytes_per_sec": 0 00:15:54.308 }, 00:15:54.308 "claimed": false, 00:15:54.308 "zoned": false, 00:15:54.308 "supported_io_types": { 00:15:54.308 "read": true, 00:15:54.308 "write": true, 00:15:54.308 "unmap": true, 00:15:54.308 "flush": true, 00:15:54.308 "reset": true, 00:15:54.308 "nvme_admin": false, 00:15:54.308 "nvme_io": false, 00:15:54.308 "nvme_io_md": false, 00:15:54.308 "write_zeroes": true, 00:15:54.308 "zcopy": true, 00:15:54.308 "get_zone_info": false, 
00:15:54.308 "zone_management": false, 00:15:54.308 "zone_append": false, 00:15:54.308 "compare": false, 00:15:54.308 "compare_and_write": false, 00:15:54.308 "abort": true, 00:15:54.308 "seek_hole": false, 00:15:54.308 "seek_data": false, 00:15:54.308 "copy": true, 00:15:54.308 "nvme_iov_md": false 00:15:54.308 }, 00:15:54.308 "memory_domains": [ 00:15:54.308 { 00:15:54.308 "dma_device_id": "system", 00:15:54.308 "dma_device_type": 1 00:15:54.308 }, 00:15:54.308 { 00:15:54.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.308 "dma_device_type": 2 00:15:54.308 } 00:15:54.308 ], 00:15:54.308 "driver_specific": {} 00:15:54.308 } 00:15:54.308 ] 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.308 BaseBdev3 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:54.308 03:18:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.308 [ 00:15:54.308 { 00:15:54.308 "name": "BaseBdev3", 00:15:54.308 "aliases": [ 00:15:54.308 "0f23747a-3414-4322-bc0e-f03824152e48" 00:15:54.308 ], 00:15:54.308 "product_name": "Malloc disk", 00:15:54.308 "block_size": 512, 00:15:54.308 "num_blocks": 65536, 00:15:54.308 "uuid": "0f23747a-3414-4322-bc0e-f03824152e48", 00:15:54.308 "assigned_rate_limits": { 00:15:54.308 "rw_ios_per_sec": 0, 00:15:54.308 "rw_mbytes_per_sec": 0, 00:15:54.308 "r_mbytes_per_sec": 0, 00:15:54.308 "w_mbytes_per_sec": 0 00:15:54.308 }, 00:15:54.308 "claimed": false, 00:15:54.308 "zoned": false, 00:15:54.308 "supported_io_types": { 00:15:54.308 "read": true, 00:15:54.308 "write": true, 00:15:54.308 "unmap": true, 00:15:54.308 "flush": true, 00:15:54.308 "reset": true, 00:15:54.308 "nvme_admin": false, 00:15:54.308 "nvme_io": false, 00:15:54.308 "nvme_io_md": 
false, 00:15:54.308 "write_zeroes": true, 00:15:54.308 "zcopy": true, 00:15:54.308 "get_zone_info": false, 00:15:54.308 "zone_management": false, 00:15:54.308 "zone_append": false, 00:15:54.308 "compare": false, 00:15:54.308 "compare_and_write": false, 00:15:54.308 "abort": true, 00:15:54.308 "seek_hole": false, 00:15:54.308 "seek_data": false, 00:15:54.308 "copy": true, 00:15:54.308 "nvme_iov_md": false 00:15:54.308 }, 00:15:54.308 "memory_domains": [ 00:15:54.308 { 00:15:54.308 "dma_device_id": "system", 00:15:54.308 "dma_device_type": 1 00:15:54.308 }, 00:15:54.308 { 00:15:54.308 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.308 "dma_device_type": 2 00:15:54.308 } 00:15:54.308 ], 00:15:54.308 "driver_specific": {} 00:15:54.308 } 00:15:54.308 ] 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.308 [2024-10-09 03:18:37.562528] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:54.308 [2024-10-09 03:18:37.562659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:54.308 [2024-10-09 03:18:37.562701] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:15:54.308 [2024-10-09 03:18:37.564706] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.308 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.309 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.309 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.309 03:18:37 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.568 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:54.568 "name": "Existed_Raid", 00:15:54.568 "uuid": "4613de8e-d52b-47c5-9df4-2bc977cd36c5", 00:15:54.568 "strip_size_kb": 64, 00:15:54.568 "state": "configuring", 00:15:54.568 "raid_level": "raid5f", 00:15:54.568 "superblock": true, 00:15:54.568 "num_base_bdevs": 3, 00:15:54.568 "num_base_bdevs_discovered": 2, 00:15:54.568 "num_base_bdevs_operational": 3, 00:15:54.568 "base_bdevs_list": [ 00:15:54.568 { 00:15:54.568 "name": "BaseBdev1", 00:15:54.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.568 "is_configured": false, 00:15:54.568 "data_offset": 0, 00:15:54.568 "data_size": 0 00:15:54.568 }, 00:15:54.568 { 00:15:54.568 "name": "BaseBdev2", 00:15:54.568 "uuid": "e0f35c20-7c69-474b-a2ed-cb6f76a8a8d9", 00:15:54.568 "is_configured": true, 00:15:54.568 "data_offset": 2048, 00:15:54.568 "data_size": 63488 00:15:54.568 }, 00:15:54.568 { 00:15:54.568 "name": "BaseBdev3", 00:15:54.568 "uuid": "0f23747a-3414-4322-bc0e-f03824152e48", 00:15:54.568 "is_configured": true, 00:15:54.568 "data_offset": 2048, 00:15:54.568 "data_size": 63488 00:15:54.568 } 00:15:54.568 ] 00:15:54.568 }' 00:15:54.568 03:18:37 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.568 03:18:37 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.828 [2024-10-09 03:18:38.017720] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:54.828 
03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:54.828 "name": "Existed_Raid", 00:15:54.828 "uuid": "4613de8e-d52b-47c5-9df4-2bc977cd36c5", 00:15:54.828 "strip_size_kb": 64, 00:15:54.828 "state": "configuring", 00:15:54.828 "raid_level": "raid5f", 00:15:54.828 "superblock": true, 00:15:54.828 "num_base_bdevs": 3, 00:15:54.828 "num_base_bdevs_discovered": 1, 00:15:54.828 "num_base_bdevs_operational": 3, 00:15:54.828 "base_bdevs_list": [ 00:15:54.828 { 00:15:54.828 "name": "BaseBdev1", 00:15:54.828 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.828 "is_configured": false, 00:15:54.828 "data_offset": 0, 00:15:54.828 "data_size": 0 00:15:54.828 }, 00:15:54.828 { 00:15:54.828 "name": null, 00:15:54.828 "uuid": "e0f35c20-7c69-474b-a2ed-cb6f76a8a8d9", 00:15:54.828 "is_configured": false, 00:15:54.828 "data_offset": 0, 00:15:54.828 "data_size": 63488 00:15:54.828 }, 00:15:54.828 { 00:15:54.828 "name": "BaseBdev3", 00:15:54.828 "uuid": "0f23747a-3414-4322-bc0e-f03824152e48", 00:15:54.828 "is_configured": true, 00:15:54.828 "data_offset": 2048, 00:15:54.828 "data_size": 63488 00:15:54.828 } 00:15:54.828 ] 00:15:54.828 }' 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:54.828 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.397 [2024-10-09 03:18:38.565500] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:55.397 BaseBdev1 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:55.397 
03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.397 [ 00:15:55.397 { 00:15:55.397 "name": "BaseBdev1", 00:15:55.397 "aliases": [ 00:15:55.397 "b012c87b-ddf0-45c3-8c75-b0478ff54735" 00:15:55.397 ], 00:15:55.397 "product_name": "Malloc disk", 00:15:55.397 "block_size": 512, 00:15:55.397 "num_blocks": 65536, 00:15:55.397 "uuid": "b012c87b-ddf0-45c3-8c75-b0478ff54735", 00:15:55.397 "assigned_rate_limits": { 00:15:55.397 "rw_ios_per_sec": 0, 00:15:55.397 "rw_mbytes_per_sec": 0, 00:15:55.397 "r_mbytes_per_sec": 0, 00:15:55.397 "w_mbytes_per_sec": 0 00:15:55.397 }, 00:15:55.397 "claimed": true, 00:15:55.397 "claim_type": "exclusive_write", 00:15:55.397 "zoned": false, 00:15:55.397 "supported_io_types": { 00:15:55.397 "read": true, 00:15:55.397 "write": true, 00:15:55.397 "unmap": true, 00:15:55.397 "flush": true, 00:15:55.397 "reset": true, 00:15:55.397 "nvme_admin": false, 00:15:55.397 "nvme_io": false, 00:15:55.397 "nvme_io_md": false, 00:15:55.397 "write_zeroes": true, 00:15:55.397 "zcopy": true, 00:15:55.397 "get_zone_info": false, 00:15:55.397 "zone_management": false, 00:15:55.397 "zone_append": false, 00:15:55.397 "compare": false, 00:15:55.397 "compare_and_write": false, 00:15:55.397 "abort": true, 00:15:55.397 "seek_hole": false, 00:15:55.397 "seek_data": false, 00:15:55.397 "copy": true, 00:15:55.397 "nvme_iov_md": false 00:15:55.397 }, 00:15:55.397 "memory_domains": [ 00:15:55.397 { 00:15:55.397 "dma_device_id": "system", 00:15:55.397 "dma_device_type": 1 00:15:55.397 }, 00:15:55.397 { 00:15:55.397 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.397 "dma_device_type": 2 00:15:55.397 } 00:15:55.397 ], 00:15:55.397 "driver_specific": {} 00:15:55.397 } 00:15:55.397 ] 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.397 
03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.397 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.398 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.398 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.398 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.398 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.398 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.398 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.398 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.398 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.398 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.398 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.398 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:15:55.398 "name": "Existed_Raid", 00:15:55.398 "uuid": "4613de8e-d52b-47c5-9df4-2bc977cd36c5", 00:15:55.398 "strip_size_kb": 64, 00:15:55.398 "state": "configuring", 00:15:55.398 "raid_level": "raid5f", 00:15:55.398 "superblock": true, 00:15:55.398 "num_base_bdevs": 3, 00:15:55.398 "num_base_bdevs_discovered": 2, 00:15:55.398 "num_base_bdevs_operational": 3, 00:15:55.398 "base_bdevs_list": [ 00:15:55.398 { 00:15:55.398 "name": "BaseBdev1", 00:15:55.398 "uuid": "b012c87b-ddf0-45c3-8c75-b0478ff54735", 00:15:55.398 "is_configured": true, 00:15:55.398 "data_offset": 2048, 00:15:55.398 "data_size": 63488 00:15:55.398 }, 00:15:55.398 { 00:15:55.398 "name": null, 00:15:55.398 "uuid": "e0f35c20-7c69-474b-a2ed-cb6f76a8a8d9", 00:15:55.398 "is_configured": false, 00:15:55.398 "data_offset": 0, 00:15:55.398 "data_size": 63488 00:15:55.398 }, 00:15:55.398 { 00:15:55.398 "name": "BaseBdev3", 00:15:55.398 "uuid": "0f23747a-3414-4322-bc0e-f03824152e48", 00:15:55.398 "is_configured": true, 00:15:55.398 "data_offset": 2048, 00:15:55.398 "data_size": 63488 00:15:55.398 } 00:15:55.398 ] 00:15:55.398 }' 00:15:55.398 03:18:38 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.398 03:18:38 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.965 [2024-10-09 03:18:39.128623] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.965 03:18:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:55.965 "name": "Existed_Raid", 00:15:55.965 "uuid": "4613de8e-d52b-47c5-9df4-2bc977cd36c5", 00:15:55.965 "strip_size_kb": 64, 00:15:55.965 "state": "configuring", 00:15:55.965 "raid_level": "raid5f", 00:15:55.965 "superblock": true, 00:15:55.965 "num_base_bdevs": 3, 00:15:55.965 "num_base_bdevs_discovered": 1, 00:15:55.965 "num_base_bdevs_operational": 3, 00:15:55.965 "base_bdevs_list": [ 00:15:55.965 { 00:15:55.965 "name": "BaseBdev1", 00:15:55.965 "uuid": "b012c87b-ddf0-45c3-8c75-b0478ff54735", 00:15:55.965 "is_configured": true, 00:15:55.965 "data_offset": 2048, 00:15:55.965 "data_size": 63488 00:15:55.965 }, 00:15:55.965 { 00:15:55.965 "name": null, 00:15:55.965 "uuid": "e0f35c20-7c69-474b-a2ed-cb6f76a8a8d9", 00:15:55.965 "is_configured": false, 00:15:55.965 "data_offset": 0, 00:15:55.965 "data_size": 63488 00:15:55.965 }, 00:15:55.965 { 00:15:55.965 "name": null, 00:15:55.965 "uuid": "0f23747a-3414-4322-bc0e-f03824152e48", 00:15:55.965 "is_configured": false, 00:15:55.965 "data_offset": 0, 00:15:55.965 "data_size": 63488 00:15:55.965 } 00:15:55.965 ] 00:15:55.965 }' 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:55.965 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.534 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 
00:15:56.534 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.535 [2024-10-09 03:18:39.639760] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:56.535 03:18:39 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:56.535 "name": "Existed_Raid", 00:15:56.535 "uuid": "4613de8e-d52b-47c5-9df4-2bc977cd36c5", 00:15:56.535 "strip_size_kb": 64, 00:15:56.535 "state": "configuring", 00:15:56.535 "raid_level": "raid5f", 00:15:56.535 "superblock": true, 00:15:56.535 "num_base_bdevs": 3, 00:15:56.535 "num_base_bdevs_discovered": 2, 00:15:56.535 "num_base_bdevs_operational": 3, 00:15:56.535 "base_bdevs_list": [ 00:15:56.535 { 00:15:56.535 "name": "BaseBdev1", 00:15:56.535 "uuid": "b012c87b-ddf0-45c3-8c75-b0478ff54735", 00:15:56.535 "is_configured": true, 00:15:56.535 "data_offset": 2048, 00:15:56.535 "data_size": 63488 00:15:56.535 }, 00:15:56.535 { 00:15:56.535 "name": null, 00:15:56.535 "uuid": "e0f35c20-7c69-474b-a2ed-cb6f76a8a8d9", 00:15:56.535 "is_configured": false, 00:15:56.535 "data_offset": 0, 00:15:56.535 "data_size": 63488 00:15:56.535 }, 00:15:56.535 { 
00:15:56.535 "name": "BaseBdev3", 00:15:56.535 "uuid": "0f23747a-3414-4322-bc0e-f03824152e48", 00:15:56.535 "is_configured": true, 00:15:56.535 "data_offset": 2048, 00:15:56.535 "data_size": 63488 00:15:56.535 } 00:15:56.535 ] 00:15:56.535 }' 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:56.535 03:18:39 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.114 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.114 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.114 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:57.114 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.114 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.114 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:15:57.114 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:15:57.114 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.114 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.114 [2024-10-09 03:18:40.158973] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:57.114 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.115 "name": "Existed_Raid", 00:15:57.115 "uuid": "4613de8e-d52b-47c5-9df4-2bc977cd36c5", 00:15:57.115 "strip_size_kb": 64, 00:15:57.115 "state": "configuring", 00:15:57.115 "raid_level": "raid5f", 00:15:57.115 "superblock": true, 00:15:57.115 "num_base_bdevs": 3, 00:15:57.115 "num_base_bdevs_discovered": 1, 00:15:57.115 
"num_base_bdevs_operational": 3, 00:15:57.115 "base_bdevs_list": [ 00:15:57.115 { 00:15:57.115 "name": null, 00:15:57.115 "uuid": "b012c87b-ddf0-45c3-8c75-b0478ff54735", 00:15:57.115 "is_configured": false, 00:15:57.115 "data_offset": 0, 00:15:57.115 "data_size": 63488 00:15:57.115 }, 00:15:57.115 { 00:15:57.115 "name": null, 00:15:57.115 "uuid": "e0f35c20-7c69-474b-a2ed-cb6f76a8a8d9", 00:15:57.115 "is_configured": false, 00:15:57.115 "data_offset": 0, 00:15:57.115 "data_size": 63488 00:15:57.115 }, 00:15:57.115 { 00:15:57.115 "name": "BaseBdev3", 00:15:57.115 "uuid": "0f23747a-3414-4322-bc0e-f03824152e48", 00:15:57.115 "is_configured": true, 00:15:57.115 "data_offset": 2048, 00:15:57.115 "data_size": 63488 00:15:57.115 } 00:15:57.115 ] 00:15:57.115 }' 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.115 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.374 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:57.374 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.374 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.374 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.374 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.633 03:18:40 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.633 [2024-10-09 03:18:40.685552] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:57.633 "name": "Existed_Raid", 00:15:57.633 "uuid": "4613de8e-d52b-47c5-9df4-2bc977cd36c5", 00:15:57.633 "strip_size_kb": 64, 00:15:57.633 "state": "configuring", 00:15:57.633 "raid_level": "raid5f", 00:15:57.633 "superblock": true, 00:15:57.633 "num_base_bdevs": 3, 00:15:57.633 "num_base_bdevs_discovered": 2, 00:15:57.633 "num_base_bdevs_operational": 3, 00:15:57.633 "base_bdevs_list": [ 00:15:57.633 { 00:15:57.633 "name": null, 00:15:57.633 "uuid": "b012c87b-ddf0-45c3-8c75-b0478ff54735", 00:15:57.633 "is_configured": false, 00:15:57.633 "data_offset": 0, 00:15:57.633 "data_size": 63488 00:15:57.633 }, 00:15:57.633 { 00:15:57.633 "name": "BaseBdev2", 00:15:57.633 "uuid": "e0f35c20-7c69-474b-a2ed-cb6f76a8a8d9", 00:15:57.633 "is_configured": true, 00:15:57.633 "data_offset": 2048, 00:15:57.633 "data_size": 63488 00:15:57.633 }, 00:15:57.633 { 00:15:57.633 "name": "BaseBdev3", 00:15:57.633 "uuid": "0f23747a-3414-4322-bc0e-f03824152e48", 00:15:57.633 "is_configured": true, 00:15:57.633 "data_offset": 2048, 00:15:57.633 "data_size": 63488 00:15:57.633 } 00:15:57.633 ] 00:15:57.633 }' 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:57.633 03:18:40 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.892 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:57.892 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:57.892 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.892 03:18:41 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:57.892 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u b012c87b-ddf0-45c3-8c75-b0478ff54735 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.153 [2024-10-09 03:18:41.297816] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:58.153 [2024-10-09 03:18:41.298215] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:15:58.153 [2024-10-09 03:18:41.298278] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:15:58.153 [2024-10-09 03:18:41.298605] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:58.153 NewBaseBdev 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.153 03:18:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.153 [2024-10-09 03:18:41.304829] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:15:58.153 [2024-10-09 03:18:41.304904] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:15:58.153 [2024-10-09 03:18:41.305135] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.153 [ 00:15:58.153 { 00:15:58.153 "name": "NewBaseBdev", 00:15:58.153 
"aliases": [ 00:15:58.153 "b012c87b-ddf0-45c3-8c75-b0478ff54735" 00:15:58.153 ], 00:15:58.153 "product_name": "Malloc disk", 00:15:58.153 "block_size": 512, 00:15:58.153 "num_blocks": 65536, 00:15:58.153 "uuid": "b012c87b-ddf0-45c3-8c75-b0478ff54735", 00:15:58.153 "assigned_rate_limits": { 00:15:58.153 "rw_ios_per_sec": 0, 00:15:58.153 "rw_mbytes_per_sec": 0, 00:15:58.153 "r_mbytes_per_sec": 0, 00:15:58.153 "w_mbytes_per_sec": 0 00:15:58.153 }, 00:15:58.153 "claimed": true, 00:15:58.153 "claim_type": "exclusive_write", 00:15:58.153 "zoned": false, 00:15:58.153 "supported_io_types": { 00:15:58.153 "read": true, 00:15:58.153 "write": true, 00:15:58.153 "unmap": true, 00:15:58.153 "flush": true, 00:15:58.153 "reset": true, 00:15:58.153 "nvme_admin": false, 00:15:58.153 "nvme_io": false, 00:15:58.153 "nvme_io_md": false, 00:15:58.153 "write_zeroes": true, 00:15:58.153 "zcopy": true, 00:15:58.153 "get_zone_info": false, 00:15:58.153 "zone_management": false, 00:15:58.153 "zone_append": false, 00:15:58.153 "compare": false, 00:15:58.153 "compare_and_write": false, 00:15:58.153 "abort": true, 00:15:58.153 "seek_hole": false, 00:15:58.153 "seek_data": false, 00:15:58.153 "copy": true, 00:15:58.153 "nvme_iov_md": false 00:15:58.153 }, 00:15:58.153 "memory_domains": [ 00:15:58.153 { 00:15:58.153 "dma_device_id": "system", 00:15:58.153 "dma_device_type": 1 00:15:58.153 }, 00:15:58.153 { 00:15:58.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.153 "dma_device_type": 2 00:15:58.153 } 00:15:58.153 ], 00:15:58.153 "driver_specific": {} 00:15:58.153 } 00:15:58.153 ] 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:15:58.153 03:18:41 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:58.153 "name": "Existed_Raid", 00:15:58.153 "uuid": "4613de8e-d52b-47c5-9df4-2bc977cd36c5", 00:15:58.153 "strip_size_kb": 64, 00:15:58.153 "state": "online", 00:15:58.153 "raid_level": "raid5f", 00:15:58.153 "superblock": true, 00:15:58.153 
"num_base_bdevs": 3, 00:15:58.153 "num_base_bdevs_discovered": 3, 00:15:58.153 "num_base_bdevs_operational": 3, 00:15:58.153 "base_bdevs_list": [ 00:15:58.153 { 00:15:58.153 "name": "NewBaseBdev", 00:15:58.153 "uuid": "b012c87b-ddf0-45c3-8c75-b0478ff54735", 00:15:58.153 "is_configured": true, 00:15:58.153 "data_offset": 2048, 00:15:58.153 "data_size": 63488 00:15:58.153 }, 00:15:58.153 { 00:15:58.153 "name": "BaseBdev2", 00:15:58.153 "uuid": "e0f35c20-7c69-474b-a2ed-cb6f76a8a8d9", 00:15:58.153 "is_configured": true, 00:15:58.153 "data_offset": 2048, 00:15:58.153 "data_size": 63488 00:15:58.153 }, 00:15:58.153 { 00:15:58.153 "name": "BaseBdev3", 00:15:58.153 "uuid": "0f23747a-3414-4322-bc0e-f03824152e48", 00:15:58.153 "is_configured": true, 00:15:58.153 "data_offset": 2048, 00:15:58.153 "data_size": 63488 00:15:58.153 } 00:15:58.153 ] 00:15:58.153 }' 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:58.153 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.723 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:15:58.723 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:15:58.723 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:15:58.723 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:15:58.723 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:15:58.723 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:15:58.723 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:15:58.723 03:18:41 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:15:58.723 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.723 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.723 [2024-10-09 03:18:41.780202] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:58.723 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.723 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:15:58.723 "name": "Existed_Raid", 00:15:58.723 "aliases": [ 00:15:58.723 "4613de8e-d52b-47c5-9df4-2bc977cd36c5" 00:15:58.723 ], 00:15:58.723 "product_name": "Raid Volume", 00:15:58.723 "block_size": 512, 00:15:58.723 "num_blocks": 126976, 00:15:58.723 "uuid": "4613de8e-d52b-47c5-9df4-2bc977cd36c5", 00:15:58.723 "assigned_rate_limits": { 00:15:58.723 "rw_ios_per_sec": 0, 00:15:58.723 "rw_mbytes_per_sec": 0, 00:15:58.723 "r_mbytes_per_sec": 0, 00:15:58.723 "w_mbytes_per_sec": 0 00:15:58.723 }, 00:15:58.723 "claimed": false, 00:15:58.723 "zoned": false, 00:15:58.723 "supported_io_types": { 00:15:58.723 "read": true, 00:15:58.723 "write": true, 00:15:58.723 "unmap": false, 00:15:58.723 "flush": false, 00:15:58.723 "reset": true, 00:15:58.723 "nvme_admin": false, 00:15:58.723 "nvme_io": false, 00:15:58.723 "nvme_io_md": false, 00:15:58.723 "write_zeroes": true, 00:15:58.723 "zcopy": false, 00:15:58.723 "get_zone_info": false, 00:15:58.723 "zone_management": false, 00:15:58.723 "zone_append": false, 00:15:58.723 "compare": false, 00:15:58.723 "compare_and_write": false, 00:15:58.723 "abort": false, 00:15:58.723 "seek_hole": false, 00:15:58.723 "seek_data": false, 00:15:58.723 "copy": false, 00:15:58.723 "nvme_iov_md": false 00:15:58.723 }, 00:15:58.723 "driver_specific": { 00:15:58.723 "raid": { 00:15:58.723 "uuid": "4613de8e-d52b-47c5-9df4-2bc977cd36c5", 00:15:58.723 
"strip_size_kb": 64, 00:15:58.723 "state": "online", 00:15:58.723 "raid_level": "raid5f", 00:15:58.723 "superblock": true, 00:15:58.723 "num_base_bdevs": 3, 00:15:58.723 "num_base_bdevs_discovered": 3, 00:15:58.723 "num_base_bdevs_operational": 3, 00:15:58.723 "base_bdevs_list": [ 00:15:58.723 { 00:15:58.723 "name": "NewBaseBdev", 00:15:58.723 "uuid": "b012c87b-ddf0-45c3-8c75-b0478ff54735", 00:15:58.723 "is_configured": true, 00:15:58.723 "data_offset": 2048, 00:15:58.723 "data_size": 63488 00:15:58.723 }, 00:15:58.723 { 00:15:58.723 "name": "BaseBdev2", 00:15:58.723 "uuid": "e0f35c20-7c69-474b-a2ed-cb6f76a8a8d9", 00:15:58.723 "is_configured": true, 00:15:58.723 "data_offset": 2048, 00:15:58.723 "data_size": 63488 00:15:58.723 }, 00:15:58.723 { 00:15:58.723 "name": "BaseBdev3", 00:15:58.723 "uuid": "0f23747a-3414-4322-bc0e-f03824152e48", 00:15:58.723 "is_configured": true, 00:15:58.723 "data_offset": 2048, 00:15:58.723 "data_size": 63488 00:15:58.723 } 00:15:58.723 ] 00:15:58.723 } 00:15:58.724 } 00:15:58.724 }' 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:15:58.724 BaseBdev2 00:15:58.724 BaseBdev3' 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # 
rpc_cmd bdev_get_bdevs -b BaseBdev3 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.724 03:18:41 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.724 03:18:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.983 03:18:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:15:58.983 03:18:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:15:58.983 03:18:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:15:58.983 03:18:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.983 03:18:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:58.983 [2024-10-09 03:18:42.035442] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:58.983 [2024-10-09 03:18:42.035567] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:58.983 [2024-10-09 03:18:42.035694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:58.983 [2024-10-09 03:18:42.036038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:58.983 [2024-10-09 03:18:42.036100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:15:58.983 03:18:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.983 03:18:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80779 00:15:58.983 03:18:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 80779 ']' 00:15:58.983 03:18:42 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 80779 00:15:58.983 03:18:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:15:58.983 03:18:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:58.983 03:18:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 80779 00:15:58.983 killing process with pid 80779 00:15:58.983 03:18:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:58.983 03:18:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:58.983 03:18:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 80779' 00:15:58.984 03:18:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 80779 00:15:58.984 [2024-10-09 03:18:42.082581] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:58.984 03:18:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 80779 00:15:59.242 [2024-10-09 03:18:42.414651] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:00.629 ************************************ 00:16:00.629 END TEST raid5f_state_function_test_sb 00:16:00.629 ************************************ 00:16:00.629 03:18:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:16:00.629 00:16:00.629 real 0m11.149s 00:16:00.629 user 0m17.366s 00:16:00.629 sys 0m2.071s 00:16:00.629 03:18:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:00.629 03:18:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.629 03:18:43 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test 
raid5f 3 00:16:00.629 03:18:43 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:16:00.629 03:18:43 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:00.629 03:18:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:00.889 ************************************ 00:16:00.889 START TEST raid5f_superblock_test 00:16:00.889 ************************************ 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 3 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81405 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81405 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 81405 ']' 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:00.889 03:18:43 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:00.889 [2024-10-09 03:18:44.023163] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:16:00.889 [2024-10-09 03:18:44.023359] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81405 ] 00:16:00.889 [2024-10-09 03:18:44.188370] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:01.457 [2024-10-09 03:18:44.468048] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:01.457 [2024-10-09 03:18:44.749013] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.457 [2024-10-09 03:18:44.749071] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.717 malloc1 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.717 [2024-10-09 03:18:44.990359] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:01.717 [2024-10-09 03:18:44.990542] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.717 [2024-10-09 03:18:44.990594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:01.717 [2024-10-09 03:18:44.990644] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.717 [2024-10-09 03:18:44.993295] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.717 [2024-10-09 03:18:44.993378] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:01.717 pt1 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.717 03:18:44 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.977 malloc2 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.977 [2024-10-09 03:18:45.073933] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:01.977 [2024-10-09 03:18:45.073993] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.977 [2024-10-09 03:18:45.074023] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:01.977 [2024-10-09 03:18:45.074034] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.977 [2024-10-09 03:18:45.076599] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.977 [2024-10-09 03:18:45.076696] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:01.977 pt2 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.977 malloc3 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.977 [2024-10-09 03:18:45.141226] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:01.977 [2024-10-09 03:18:45.141335] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:01.977 [2024-10-09 03:18:45.141380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:01.977 [2024-10-09 03:18:45.141414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:01.977 [2024-10-09 03:18:45.143954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:01.977 [2024-10-09 03:18:45.144030] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:01.977 pt3 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.977 [2024-10-09 03:18:45.153290] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:01.977 [2024-10-09 03:18:45.155578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:01.977 [2024-10-09 03:18:45.155704] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:01.977 [2024-10-09 03:18:45.155938] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:01.977 [2024-10-09 03:18:45.156000] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:16:01.977 [2024-10-09 03:18:45.156271] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:01.977 [2024-10-09 03:18:45.162674] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:01.977 [2024-10-09 03:18:45.162733] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:01.977 [2024-10-09 03:18:45.162997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.977 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:01.977 "name": "raid_bdev1", 00:16:01.977 "uuid": "f407df48-88b7-482c-a257-ea88ccfea44a", 00:16:01.977 "strip_size_kb": 64, 00:16:01.977 "state": "online", 00:16:01.977 "raid_level": "raid5f", 00:16:01.977 "superblock": true, 00:16:01.977 "num_base_bdevs": 3, 00:16:01.977 "num_base_bdevs_discovered": 3, 00:16:01.977 "num_base_bdevs_operational": 3, 00:16:01.977 "base_bdevs_list": [ 00:16:01.977 { 00:16:01.977 "name": "pt1", 00:16:01.977 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:01.977 "is_configured": true, 00:16:01.977 "data_offset": 2048, 00:16:01.977 "data_size": 63488 00:16:01.977 }, 00:16:01.977 { 00:16:01.977 "name": "pt2", 00:16:01.977 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:01.977 "is_configured": true, 00:16:01.977 "data_offset": 2048, 00:16:01.977 "data_size": 63488 00:16:01.977 }, 00:16:01.977 { 00:16:01.978 "name": "pt3", 00:16:01.978 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:01.978 "is_configured": true, 00:16:01.978 "data_offset": 2048, 00:16:01.978 "data_size": 63488 00:16:01.978 } 00:16:01.978 ] 00:16:01.978 }' 00:16:01.978 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:01.978 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:02.547 03:18:45 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:02.547 [2024-10-09 03:18:45.622895] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:02.547 "name": "raid_bdev1", 00:16:02.547 "aliases": [ 00:16:02.547 "f407df48-88b7-482c-a257-ea88ccfea44a" 00:16:02.547 ], 00:16:02.547 "product_name": "Raid Volume", 00:16:02.547 "block_size": 512, 00:16:02.547 "num_blocks": 126976, 00:16:02.547 "uuid": "f407df48-88b7-482c-a257-ea88ccfea44a", 00:16:02.547 "assigned_rate_limits": { 00:16:02.547 "rw_ios_per_sec": 0, 00:16:02.547 "rw_mbytes_per_sec": 0, 00:16:02.547 "r_mbytes_per_sec": 0, 00:16:02.547 "w_mbytes_per_sec": 0 00:16:02.547 }, 00:16:02.547 "claimed": false, 00:16:02.547 "zoned": false, 00:16:02.547 "supported_io_types": { 00:16:02.547 "read": true, 00:16:02.547 "write": true, 00:16:02.547 "unmap": false, 00:16:02.547 "flush": false, 00:16:02.547 "reset": true, 00:16:02.547 "nvme_admin": false, 00:16:02.547 "nvme_io": false, 00:16:02.547 "nvme_io_md": false, 
00:16:02.547 "write_zeroes": true, 00:16:02.547 "zcopy": false, 00:16:02.547 "get_zone_info": false, 00:16:02.547 "zone_management": false, 00:16:02.547 "zone_append": false, 00:16:02.547 "compare": false, 00:16:02.547 "compare_and_write": false, 00:16:02.547 "abort": false, 00:16:02.547 "seek_hole": false, 00:16:02.547 "seek_data": false, 00:16:02.547 "copy": false, 00:16:02.547 "nvme_iov_md": false 00:16:02.547 }, 00:16:02.547 "driver_specific": { 00:16:02.547 "raid": { 00:16:02.547 "uuid": "f407df48-88b7-482c-a257-ea88ccfea44a", 00:16:02.547 "strip_size_kb": 64, 00:16:02.547 "state": "online", 00:16:02.547 "raid_level": "raid5f", 00:16:02.547 "superblock": true, 00:16:02.547 "num_base_bdevs": 3, 00:16:02.547 "num_base_bdevs_discovered": 3, 00:16:02.547 "num_base_bdevs_operational": 3, 00:16:02.547 "base_bdevs_list": [ 00:16:02.547 { 00:16:02.547 "name": "pt1", 00:16:02.547 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:02.547 "is_configured": true, 00:16:02.547 "data_offset": 2048, 00:16:02.547 "data_size": 63488 00:16:02.547 }, 00:16:02.547 { 00:16:02.547 "name": "pt2", 00:16:02.547 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:02.547 "is_configured": true, 00:16:02.547 "data_offset": 2048, 00:16:02.547 "data_size": 63488 00:16:02.547 }, 00:16:02.547 { 00:16:02.547 "name": "pt3", 00:16:02.547 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:02.547 "is_configured": true, 00:16:02.547 "data_offset": 2048, 00:16:02.547 "data_size": 63488 00:16:02.547 } 00:16:02.547 ] 00:16:02.547 } 00:16:02.547 } 00:16:02.547 }' 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:02.547 pt2 00:16:02.547 pt3' 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:02.547 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:02.547 
03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.806 [2024-10-09 03:18:45.906242] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f407df48-88b7-482c-a257-ea88ccfea44a 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z f407df48-88b7-482c-a257-ea88ccfea44a ']' 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:02.806 03:18:45 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.806 [2024-10-09 03:18:45.949968] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.806 [2024-10-09 03:18:45.949999] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:02.806 [2024-10-09 03:18:45.950087] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:02.806 [2024-10-09 03:18:45.950177] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:02.806 [2024-10-09 03:18:45.950189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.806 03:18:45 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:02.806 [2024-10-09 03:18:46.090006] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:02.806 [2024-10-09 03:18:46.092295] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:02.806 [2024-10-09 03:18:46.092355] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:02.806 [2024-10-09 03:18:46.092414] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:02.806 [2024-10-09 03:18:46.092465] 
bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:02.806 [2024-10-09 03:18:46.092486] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:02.806 [2024-10-09 03:18:46.092504] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:02.806 [2024-10-09 03:18:46.092517] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:16:02.806 request: 00:16:02.806 { 00:16:02.806 "name": "raid_bdev1", 00:16:02.806 "raid_level": "raid5f", 00:16:02.806 "base_bdevs": [ 00:16:02.806 "malloc1", 00:16:02.806 "malloc2", 00:16:02.806 "malloc3" 00:16:02.806 ], 00:16:02.806 "strip_size_kb": 64, 00:16:02.806 "superblock": false, 00:16:02.806 "method": "bdev_raid_create", 00:16:02.806 "req_id": 1 00:16:02.806 } 00:16:02.806 Got JSON-RPC error response 00:16:02.806 response: 00:16:02.806 { 00:16:02.806 "code": -17, 00:16:02.806 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:02.806 } 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:02.806 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:16:03.066 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.066 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:16:03.066 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:16:03.066 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:03.066 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.066 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.066 [2024-10-09 03:18:46.149980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:03.066 [2024-10-09 03:18:46.150076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.066 [2024-10-09 03:18:46.150125] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:03.066 [2024-10-09 03:18:46.150163] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.067 [2024-10-09 03:18:46.152815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.067 [2024-10-09 03:18:46.152907] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:03.067 [2024-10-09 03:18:46.153011] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:03.067 [2024-10-09 03:18:46.153093] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:03.067 pt1 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.067 "name": "raid_bdev1", 00:16:03.067 "uuid": "f407df48-88b7-482c-a257-ea88ccfea44a", 00:16:03.067 "strip_size_kb": 64, 00:16:03.067 "state": "configuring", 00:16:03.067 "raid_level": "raid5f", 00:16:03.067 "superblock": true, 00:16:03.067 "num_base_bdevs": 3, 00:16:03.067 "num_base_bdevs_discovered": 1, 00:16:03.067 
"num_base_bdevs_operational": 3, 00:16:03.067 "base_bdevs_list": [ 00:16:03.067 { 00:16:03.067 "name": "pt1", 00:16:03.067 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.067 "is_configured": true, 00:16:03.067 "data_offset": 2048, 00:16:03.067 "data_size": 63488 00:16:03.067 }, 00:16:03.067 { 00:16:03.067 "name": null, 00:16:03.067 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.067 "is_configured": false, 00:16:03.067 "data_offset": 2048, 00:16:03.067 "data_size": 63488 00:16:03.067 }, 00:16:03.067 { 00:16:03.067 "name": null, 00:16:03.067 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:03.067 "is_configured": false, 00:16:03.067 "data_offset": 2048, 00:16:03.067 "data_size": 63488 00:16:03.067 } 00:16:03.067 ] 00:16:03.067 }' 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.067 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.327 [2024-10-09 03:18:46.601501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:03.327 [2024-10-09 03:18:46.601645] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.327 [2024-10-09 03:18:46.601691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:03.327 [2024-10-09 03:18:46.601720] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.327 [2024-10-09 03:18:46.602283] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.327 [2024-10-09 03:18:46.602351] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:03.327 [2024-10-09 03:18:46.602490] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:03.327 [2024-10-09 03:18:46.602548] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:03.327 pt2 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.327 [2024-10-09 03:18:46.613455] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.327 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.587 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.587 03:18:46 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.587 "name": "raid_bdev1", 00:16:03.587 "uuid": "f407df48-88b7-482c-a257-ea88ccfea44a", 00:16:03.587 "strip_size_kb": 64, 00:16:03.587 "state": "configuring", 00:16:03.587 "raid_level": "raid5f", 00:16:03.587 "superblock": true, 00:16:03.587 "num_base_bdevs": 3, 00:16:03.587 "num_base_bdevs_discovered": 1, 00:16:03.587 "num_base_bdevs_operational": 3, 00:16:03.587 "base_bdevs_list": [ 00:16:03.587 { 00:16:03.587 "name": "pt1", 00:16:03.587 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.587 "is_configured": true, 00:16:03.587 "data_offset": 2048, 00:16:03.587 "data_size": 63488 00:16:03.587 }, 00:16:03.587 { 00:16:03.587 "name": null, 00:16:03.587 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.587 "is_configured": false, 00:16:03.587 "data_offset": 0, 00:16:03.587 "data_size": 63488 00:16:03.587 }, 00:16:03.587 { 00:16:03.587 "name": null, 00:16:03.587 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:03.587 "is_configured": false, 00:16:03.587 "data_offset": 2048, 00:16:03.587 "data_size": 63488 00:16:03.587 } 00:16:03.587 ] 00:16:03.587 }' 00:16:03.587 03:18:46 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.587 03:18:46 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.847 [2024-10-09 03:18:47.064725] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:03.847 [2024-10-09 03:18:47.064876] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.847 [2024-10-09 03:18:47.064916] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:16:03.847 [2024-10-09 03:18:47.064948] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.847 [2024-10-09 03:18:47.065457] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.847 [2024-10-09 03:18:47.065533] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:03.847 [2024-10-09 03:18:47.065655] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:03.847 [2024-10-09 03:18:47.065710] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:03.847 pt2 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:03.847 03:18:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.847 [2024-10-09 03:18:47.076709] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:03.847 [2024-10-09 03:18:47.076803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:03.847 [2024-10-09 03:18:47.076834] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:16:03.847 [2024-10-09 03:18:47.076873] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:03.847 [2024-10-09 03:18:47.077284] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:03.847 [2024-10-09 03:18:47.077353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:03.847 [2024-10-09 03:18:47.077440] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:03.847 [2024-10-09 03:18:47.077488] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:03.847 [2024-10-09 03:18:47.077626] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:03.847 [2024-10-09 03:18:47.077666] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:03.847 [2024-10-09 03:18:47.077955] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:03.847 [2024-10-09 03:18:47.082882] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:03.847 [2024-10-09 03:18:47.082935] 
bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:16:03.847 [2024-10-09 03:18:47.083153] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:03.847 pt3 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:03.847 "name": "raid_bdev1", 00:16:03.847 "uuid": "f407df48-88b7-482c-a257-ea88ccfea44a", 00:16:03.847 "strip_size_kb": 64, 00:16:03.847 "state": "online", 00:16:03.847 "raid_level": "raid5f", 00:16:03.847 "superblock": true, 00:16:03.847 "num_base_bdevs": 3, 00:16:03.847 "num_base_bdevs_discovered": 3, 00:16:03.847 "num_base_bdevs_operational": 3, 00:16:03.847 "base_bdevs_list": [ 00:16:03.847 { 00:16:03.847 "name": "pt1", 00:16:03.847 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:03.847 "is_configured": true, 00:16:03.847 "data_offset": 2048, 00:16:03.847 "data_size": 63488 00:16:03.847 }, 00:16:03.847 { 00:16:03.847 "name": "pt2", 00:16:03.847 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:03.847 "is_configured": true, 00:16:03.847 "data_offset": 2048, 00:16:03.847 "data_size": 63488 00:16:03.847 }, 00:16:03.847 { 00:16:03.847 "name": "pt3", 00:16:03.847 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:03.847 "is_configured": true, 00:16:03.847 "data_offset": 2048, 00:16:03.847 "data_size": 63488 00:16:03.847 } 00:16:03.847 ] 00:16:03.847 }' 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:03.847 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.417 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:16:04.417 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:16:04.417 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:16:04.417 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:04.417 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:04.417 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:04.417 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:04.417 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.417 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.417 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.417 [2024-10-09 03:18:47.513853] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.417 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.417 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:04.417 "name": "raid_bdev1", 00:16:04.417 "aliases": [ 00:16:04.417 "f407df48-88b7-482c-a257-ea88ccfea44a" 00:16:04.417 ], 00:16:04.417 "product_name": "Raid Volume", 00:16:04.417 "block_size": 512, 00:16:04.417 "num_blocks": 126976, 00:16:04.417 "uuid": "f407df48-88b7-482c-a257-ea88ccfea44a", 00:16:04.417 "assigned_rate_limits": { 00:16:04.417 "rw_ios_per_sec": 0, 00:16:04.417 "rw_mbytes_per_sec": 0, 00:16:04.417 "r_mbytes_per_sec": 0, 00:16:04.417 "w_mbytes_per_sec": 0 00:16:04.417 }, 00:16:04.417 "claimed": false, 00:16:04.417 "zoned": false, 00:16:04.417 "supported_io_types": { 00:16:04.417 "read": true, 00:16:04.417 "write": true, 00:16:04.417 "unmap": false, 00:16:04.417 "flush": false, 00:16:04.417 "reset": true, 00:16:04.417 "nvme_admin": false, 00:16:04.417 "nvme_io": false, 00:16:04.417 "nvme_io_md": false, 00:16:04.417 "write_zeroes": true, 00:16:04.417 "zcopy": false, 00:16:04.417 
"get_zone_info": false, 00:16:04.417 "zone_management": false, 00:16:04.417 "zone_append": false, 00:16:04.417 "compare": false, 00:16:04.417 "compare_and_write": false, 00:16:04.417 "abort": false, 00:16:04.417 "seek_hole": false, 00:16:04.417 "seek_data": false, 00:16:04.417 "copy": false, 00:16:04.417 "nvme_iov_md": false 00:16:04.417 }, 00:16:04.417 "driver_specific": { 00:16:04.417 "raid": { 00:16:04.417 "uuid": "f407df48-88b7-482c-a257-ea88ccfea44a", 00:16:04.417 "strip_size_kb": 64, 00:16:04.417 "state": "online", 00:16:04.417 "raid_level": "raid5f", 00:16:04.417 "superblock": true, 00:16:04.417 "num_base_bdevs": 3, 00:16:04.417 "num_base_bdevs_discovered": 3, 00:16:04.417 "num_base_bdevs_operational": 3, 00:16:04.417 "base_bdevs_list": [ 00:16:04.417 { 00:16:04.417 "name": "pt1", 00:16:04.417 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:04.417 "is_configured": true, 00:16:04.417 "data_offset": 2048, 00:16:04.417 "data_size": 63488 00:16:04.417 }, 00:16:04.417 { 00:16:04.417 "name": "pt2", 00:16:04.417 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.417 "is_configured": true, 00:16:04.417 "data_offset": 2048, 00:16:04.417 "data_size": 63488 00:16:04.417 }, 00:16:04.417 { 00:16:04.417 "name": "pt3", 00:16:04.417 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:04.417 "is_configured": true, 00:16:04.417 "data_offset": 2048, 00:16:04.417 "data_size": 63488 00:16:04.417 } 00:16:04.417 ] 00:16:04.417 } 00:16:04.417 } 00:16:04.417 }' 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:16:04.418 pt2 00:16:04.418 pt3' 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.418 03:18:47 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.418 03:18:47 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:04.677 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:04.677 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:16:04.678 [2024-10-09 03:18:47.777308] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' f407df48-88b7-482c-a257-ea88ccfea44a '!=' f407df48-88b7-482c-a257-ea88ccfea44a ']' 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 
00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.678 [2024-10-09 03:18:47.805177] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.678 "name": "raid_bdev1", 00:16:04.678 "uuid": "f407df48-88b7-482c-a257-ea88ccfea44a", 00:16:04.678 "strip_size_kb": 64, 00:16:04.678 "state": "online", 00:16:04.678 "raid_level": "raid5f", 00:16:04.678 "superblock": true, 00:16:04.678 "num_base_bdevs": 3, 00:16:04.678 "num_base_bdevs_discovered": 2, 00:16:04.678 "num_base_bdevs_operational": 2, 00:16:04.678 "base_bdevs_list": [ 00:16:04.678 { 00:16:04.678 "name": null, 00:16:04.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.678 "is_configured": false, 00:16:04.678 "data_offset": 0, 00:16:04.678 "data_size": 63488 00:16:04.678 }, 00:16:04.678 { 00:16:04.678 "name": "pt2", 00:16:04.678 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:04.678 "is_configured": true, 00:16:04.678 "data_offset": 2048, 00:16:04.678 "data_size": 63488 00:16:04.678 }, 00:16:04.678 { 00:16:04.678 "name": "pt3", 00:16:04.678 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:04.678 "is_configured": true, 00:16:04.678 "data_offset": 2048, 00:16:04.678 "data_size": 63488 00:16:04.678 } 00:16:04.678 ] 00:16:04.678 }' 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.678 03:18:47 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.248 [2024-10-09 03:18:48.252395] 
bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.248 [2024-10-09 03:18:48.252494] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.248 [2024-10-09 03:18:48.252588] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.248 [2024-10-09 03:18:48.252661] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.248 [2024-10-09 03:18:48.252712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.248 [2024-10-09 03:18:48.336217] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:05.248 [2024-10-09 03:18:48.336273] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.248 [2024-10-09 03:18:48.336289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:16:05.248 [2024-10-09 03:18:48.336300] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:16:05.248 [2024-10-09 03:18:48.338763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.248 [2024-10-09 03:18:48.338804] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:05.248 [2024-10-09 03:18:48.338897] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:05.248 [2024-10-09 03:18:48.338956] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:05.248 pt2 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.248 "name": "raid_bdev1", 00:16:05.248 "uuid": "f407df48-88b7-482c-a257-ea88ccfea44a", 00:16:05.248 "strip_size_kb": 64, 00:16:05.248 "state": "configuring", 00:16:05.248 "raid_level": "raid5f", 00:16:05.248 "superblock": true, 00:16:05.248 "num_base_bdevs": 3, 00:16:05.248 "num_base_bdevs_discovered": 1, 00:16:05.248 "num_base_bdevs_operational": 2, 00:16:05.248 "base_bdevs_list": [ 00:16:05.248 { 00:16:05.248 "name": null, 00:16:05.248 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.248 "is_configured": false, 00:16:05.248 "data_offset": 2048, 00:16:05.248 "data_size": 63488 00:16:05.248 }, 00:16:05.248 { 00:16:05.248 "name": "pt2", 00:16:05.248 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.248 "is_configured": true, 00:16:05.248 "data_offset": 2048, 00:16:05.248 "data_size": 63488 00:16:05.248 }, 00:16:05.248 { 00:16:05.248 "name": null, 00:16:05.248 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.248 "is_configured": false, 00:16:05.248 "data_offset": 2048, 00:16:05.248 "data_size": 63488 00:16:05.248 } 00:16:05.248 ] 00:16:05.248 }' 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.248 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.509 [2024-10-09 03:18:48.779586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:05.509 [2024-10-09 03:18:48.779770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:05.509 [2024-10-09 03:18:48.779819] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:16:05.509 [2024-10-09 03:18:48.779879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:05.509 [2024-10-09 03:18:48.780515] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:05.509 [2024-10-09 03:18:48.780608] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:05.509 [2024-10-09 03:18:48.780763] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:05.509 [2024-10-09 03:18:48.780860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:05.509 [2024-10-09 03:18:48.781048] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:05.509 [2024-10-09 03:18:48.781094] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:05.509 [2024-10-09 03:18:48.781405] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:05.509 [2024-10-09 03:18:48.787198] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:05.509 [2024-10-09 03:18:48.787258] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:16:05.509 [2024-10-09 03:18:48.787658] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:05.509 pt3 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.509 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:05.769 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.769 03:18:48 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:05.769 "name": "raid_bdev1", 00:16:05.769 "uuid": "f407df48-88b7-482c-a257-ea88ccfea44a", 00:16:05.769 "strip_size_kb": 64, 00:16:05.769 "state": "online", 00:16:05.769 "raid_level": "raid5f", 00:16:05.769 "superblock": true, 00:16:05.769 "num_base_bdevs": 3, 00:16:05.769 "num_base_bdevs_discovered": 2, 00:16:05.769 "num_base_bdevs_operational": 2, 00:16:05.769 "base_bdevs_list": [ 00:16:05.769 { 00:16:05.769 "name": null, 00:16:05.769 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:05.769 "is_configured": false, 00:16:05.769 "data_offset": 2048, 00:16:05.769 "data_size": 63488 00:16:05.769 }, 00:16:05.769 { 00:16:05.769 "name": "pt2", 00:16:05.769 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:05.769 "is_configured": true, 00:16:05.769 "data_offset": 2048, 00:16:05.769 "data_size": 63488 00:16:05.769 }, 00:16:05.769 { 00:16:05.769 "name": "pt3", 00:16:05.769 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:05.769 "is_configured": true, 00:16:05.769 "data_offset": 2048, 00:16:05.769 "data_size": 63488 00:16:05.769 } 00:16:05.769 ] 00:16:05.769 }' 00:16:05.769 03:18:48 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:05.769 03:18:48 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.029 [2024-10-09 03:18:49.202657] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.029 [2024-10-09 03:18:49.202783] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:06.029 [2024-10-09 03:18:49.202910] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:06.029 [2024-10-09 03:18:49.203014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:06.029 [2024-10-09 03:18:49.203060] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.029 [2024-10-09 03:18:49.258553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:06.029 [2024-10-09 03:18:49.258665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.029 [2024-10-09 03:18:49.258691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:06.029 [2024-10-09 03:18:49.258700] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.029 [2024-10-09 03:18:49.261322] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.029 [2024-10-09 03:18:49.261400] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:06.029 [2024-10-09 03:18:49.261498] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:06.029 [2024-10-09 03:18:49.261549] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:06.029 [2024-10-09 03:18:49.261693] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:06.029 [2024-10-09 03:18:49.261706] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:06.029 [2024-10-09 03:18:49.261722] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:16:06.029 [2024-10-09 03:18:49.261785] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:06.029 pt1 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:16:06.029 03:18:49 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.029 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.030 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.030 "name": "raid_bdev1", 00:16:06.030 "uuid": "f407df48-88b7-482c-a257-ea88ccfea44a", 00:16:06.030 "strip_size_kb": 64, 00:16:06.030 "state": "configuring", 00:16:06.030 "raid_level": "raid5f", 00:16:06.030 
"superblock": true, 00:16:06.030 "num_base_bdevs": 3, 00:16:06.030 "num_base_bdevs_discovered": 1, 00:16:06.030 "num_base_bdevs_operational": 2, 00:16:06.030 "base_bdevs_list": [ 00:16:06.030 { 00:16:06.030 "name": null, 00:16:06.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.030 "is_configured": false, 00:16:06.030 "data_offset": 2048, 00:16:06.030 "data_size": 63488 00:16:06.030 }, 00:16:06.030 { 00:16:06.030 "name": "pt2", 00:16:06.030 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.030 "is_configured": true, 00:16:06.030 "data_offset": 2048, 00:16:06.030 "data_size": 63488 00:16:06.030 }, 00:16:06.030 { 00:16:06.030 "name": null, 00:16:06.030 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:06.030 "is_configured": false, 00:16:06.030 "data_offset": 2048, 00:16:06.030 "data_size": 63488 00:16:06.030 } 00:16:06.030 ] 00:16:06.030 }' 00:16:06.030 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.030 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.601 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:16:06.601 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:06.601 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.601 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.601 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.602 [2024-10-09 03:18:49.769664] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:06.602 [2024-10-09 03:18:49.769789] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.602 [2024-10-09 03:18:49.769828] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:16:06.602 [2024-10-09 03:18:49.769865] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.602 [2024-10-09 03:18:49.770404] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.602 [2024-10-09 03:18:49.770466] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:06.602 [2024-10-09 03:18:49.770581] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:06.602 [2024-10-09 03:18:49.770634] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:06.602 [2024-10-09 03:18:49.770792] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:16:06.602 [2024-10-09 03:18:49.770827] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:06.602 [2024-10-09 03:18:49.771162] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:06.602 [2024-10-09 03:18:49.776923] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:16:06.602 [2024-10-09 03:18:49.776984] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:16:06.602 [2024-10-09 03:18:49.777268] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.602 pt3 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.602 "name": "raid_bdev1", 00:16:06.602 "uuid": "f407df48-88b7-482c-a257-ea88ccfea44a", 00:16:06.602 "strip_size_kb": 64, 00:16:06.602 "state": "online", 00:16:06.602 "raid_level": 
"raid5f", 00:16:06.602 "superblock": true, 00:16:06.602 "num_base_bdevs": 3, 00:16:06.602 "num_base_bdevs_discovered": 2, 00:16:06.602 "num_base_bdevs_operational": 2, 00:16:06.602 "base_bdevs_list": [ 00:16:06.602 { 00:16:06.602 "name": null, 00:16:06.602 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.602 "is_configured": false, 00:16:06.602 "data_offset": 2048, 00:16:06.602 "data_size": 63488 00:16:06.602 }, 00:16:06.602 { 00:16:06.602 "name": "pt2", 00:16:06.602 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:06.602 "is_configured": true, 00:16:06.602 "data_offset": 2048, 00:16:06.602 "data_size": 63488 00:16:06.602 }, 00:16:06.602 { 00:16:06.602 "name": "pt3", 00:16:06.602 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:06.602 "is_configured": true, 00:16:06.602 "data_offset": 2048, 00:16:06.602 "data_size": 63488 00:16:06.602 } 00:16:06.602 ] 00:16:06.602 }' 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.602 03:18:49 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:07.172 [2024-10-09 03:18:50.227952] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' f407df48-88b7-482c-a257-ea88ccfea44a '!=' f407df48-88b7-482c-a257-ea88ccfea44a ']' 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81405 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 81405 ']' 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 81405 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81405 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81405' 00:16:07.172 killing process with pid 81405 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 81405 00:16:07.172 [2024-10-09 03:18:50.291525] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:07.172 [2024-10-09 03:18:50.291625] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:16:07.172 [2024-10-09 03:18:50.291684] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:07.172 [2024-10-09 03:18:50.291696] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:16:07.172 03:18:50 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 81405 00:16:07.432 [2024-10-09 03:18:50.620907] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:08.813 03:18:51 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:16:08.813 00:16:08.813 real 0m8.060s 00:16:08.813 user 0m12.245s 00:16:08.813 sys 0m1.464s 00:16:08.813 03:18:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:08.813 ************************************ 00:16:08.813 END TEST raid5f_superblock_test 00:16:08.813 ************************************ 00:16:08.813 03:18:51 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:08.813 03:18:52 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:16:08.813 03:18:52 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:16:08.813 03:18:52 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:08.813 03:18:52 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:08.813 03:18:52 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:08.813 ************************************ 00:16:08.813 START TEST raid5f_rebuild_test 00:16:08.813 ************************************ 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 false false true 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=3 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:08.813 03:18:52 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81849 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81849 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 81849 ']' 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.813 03:18:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.073 [2024-10-09 03:18:52.165007] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:16:09.073 [2024-10-09 03:18:52.165173] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:16:09.073 Zero copy mechanism will not be used. 00:16:09.073 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81849 ] 00:16:09.073 [2024-10-09 03:18:52.324098] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.334 [2024-10-09 03:18:52.584858] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:09.593 [2024-10-09 03:18:52.828234] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.593 [2024-10-09 03:18:52.828366] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:09.853 03:18:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:09.853 03:18:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:16:09.853 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.853 03:18:52 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:09.853 03:18:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.853 03:18:52 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.853 BaseBdev1_malloc 00:16:09.853 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.853 
03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:09.853 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.853 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.853 [2024-10-09 03:18:53.049656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:09.853 [2024-10-09 03:18:53.049742] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.853 [2024-10-09 03:18:53.049767] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:09.853 [2024-10-09 03:18:53.049784] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.853 [2024-10-09 03:18:53.052215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.853 [2024-10-09 03:18:53.052254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:09.853 BaseBdev1 00:16:09.853 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.853 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.853 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:09.853 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.853 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.853 BaseBdev2_malloc 00:16:09.854 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.854 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:09.854 03:18:53 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.854 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:09.854 [2024-10-09 03:18:53.137632] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:09.854 [2024-10-09 03:18:53.137777] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.854 [2024-10-09 03:18:53.137813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:09.854 [2024-10-09 03:18:53.137854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.854 [2024-10-09 03:18:53.140240] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.854 [2024-10-09 03:18:53.140317] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:09.854 BaseBdev2 00:16:09.854 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.854 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:09.854 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:09.854 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.854 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.114 BaseBdev3_malloc 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.114 [2024-10-09 03:18:53.195521] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:10.114 [2024-10-09 03:18:53.195628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.114 [2024-10-09 03:18:53.195663] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:10.114 [2024-10-09 03:18:53.195694] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.114 [2024-10-09 03:18:53.198091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.114 [2024-10-09 03:18:53.198169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:10.114 BaseBdev3 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.114 spare_malloc 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.114 spare_delay 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:10.114 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.115 [2024-10-09 03:18:53.265254] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:10.115 [2024-10-09 03:18:53.265364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:10.115 [2024-10-09 03:18:53.265386] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:10.115 [2024-10-09 03:18:53.265398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:10.115 [2024-10-09 03:18:53.267775] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:10.115 [2024-10-09 03:18:53.267818] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:10.115 spare 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.115 [2024-10-09 03:18:53.277308] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:10.115 [2024-10-09 03:18:53.279393] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:10.115 [2024-10-09 03:18:53.279509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:10.115 [2024-10-09 03:18:53.279626] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:10.115 [2024-10-09 03:18:53.279669] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:10.115 [2024-10-09 
03:18:53.279957] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:10.115 [2024-10-09 03:18:53.285347] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:10.115 [2024-10-09 03:18:53.285410] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:10.115 [2024-10-09 03:18:53.285623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.115 "name": "raid_bdev1", 00:16:10.115 "uuid": "8d57d9e2-5915-4bdc-93ff-5a77bdd1df44", 00:16:10.115 "strip_size_kb": 64, 00:16:10.115 "state": "online", 00:16:10.115 "raid_level": "raid5f", 00:16:10.115 "superblock": false, 00:16:10.115 "num_base_bdevs": 3, 00:16:10.115 "num_base_bdevs_discovered": 3, 00:16:10.115 "num_base_bdevs_operational": 3, 00:16:10.115 "base_bdevs_list": [ 00:16:10.115 { 00:16:10.115 "name": "BaseBdev1", 00:16:10.115 "uuid": "0b435052-521f-5b3d-be1c-7b841e28732f", 00:16:10.115 "is_configured": true, 00:16:10.115 "data_offset": 0, 00:16:10.115 "data_size": 65536 00:16:10.115 }, 00:16:10.115 { 00:16:10.115 "name": "BaseBdev2", 00:16:10.115 "uuid": "614e75fe-dd5a-5b2b-b9b8-8dda30343475", 00:16:10.115 "is_configured": true, 00:16:10.115 "data_offset": 0, 00:16:10.115 "data_size": 65536 00:16:10.115 }, 00:16:10.115 { 00:16:10.115 "name": "BaseBdev3", 00:16:10.115 "uuid": "5d56c879-a480-5f75-a7e6-3982f93ca2d5", 00:16:10.115 "is_configured": true, 00:16:10.115 "data_offset": 0, 00:16:10.115 "data_size": 65536 00:16:10.115 } 00:16:10.115 ] 00:16:10.115 }' 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.115 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.686 03:18:53 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.686 [2024-10-09 03:18:53.736285] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # 
local nbd_list 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:10.686 03:18:53 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:10.946 [2024-10-09 03:18:54.031811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:10.946 /dev/nbd0 00:16:10.946 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:10.946 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:10.946 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:10.946 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:10.946 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:10.946 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:10.946 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:10.946 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:10.946 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:10.946 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:10.946 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:10.946 1+0 records in 00:16:10.946 1+0 records out 00:16:10.946 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282018 s, 14.5 MB/s 00:16:10.946 
03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.946 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:10.947 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:10.947 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:10.947 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:10.947 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:10.947 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:10.947 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:10.947 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:10.947 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:10.947 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:16:11.517 512+0 records in 00:16:11.517 512+0 records out 00:16:11.517 67108864 bytes (67 MB, 64 MiB) copied, 0.519561 s, 129 MB/s 00:16:11.517 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:11.517 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:11.517 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:11.517 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:11.517 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:11.517 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 
00:16:11.517 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:11.777 [2024-10-09 03:18:54.839184] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.777 [2024-10-09 03:18:54.866503] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:11.777 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:11.778 03:18:54 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:11.778 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:11.778 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:11.778 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:11.778 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:11.778 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:11.778 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:11.778 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.778 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.778 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.778 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:11.778 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.778 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:11.778 "name": "raid_bdev1", 00:16:11.778 "uuid": "8d57d9e2-5915-4bdc-93ff-5a77bdd1df44", 00:16:11.778 "strip_size_kb": 64, 00:16:11.778 "state": "online", 00:16:11.778 "raid_level": "raid5f", 00:16:11.778 "superblock": false, 00:16:11.778 "num_base_bdevs": 3, 00:16:11.778 "num_base_bdevs_discovered": 2, 00:16:11.778 "num_base_bdevs_operational": 2, 00:16:11.778 "base_bdevs_list": [ 00:16:11.778 { 00:16:11.778 "name": null, 00:16:11.778 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.778 "is_configured": false, 00:16:11.778 "data_offset": 0, 00:16:11.778 "data_size": 65536 00:16:11.778 }, 00:16:11.778 { 00:16:11.778 
"name": "BaseBdev2", 00:16:11.778 "uuid": "614e75fe-dd5a-5b2b-b9b8-8dda30343475", 00:16:11.778 "is_configured": true, 00:16:11.778 "data_offset": 0, 00:16:11.778 "data_size": 65536 00:16:11.778 }, 00:16:11.778 { 00:16:11.778 "name": "BaseBdev3", 00:16:11.778 "uuid": "5d56c879-a480-5f75-a7e6-3982f93ca2d5", 00:16:11.778 "is_configured": true, 00:16:11.778 "data_offset": 0, 00:16:11.778 "data_size": 65536 00:16:11.778 } 00:16:11.778 ] 00:16:11.778 }' 00:16:11.778 03:18:54 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:11.778 03:18:54 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.038 03:18:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:12.038 03:18:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.038 03:18:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:12.038 [2024-10-09 03:18:55.333950] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:12.298 [2024-10-09 03:18:55.348012] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:16:12.298 03:18:55 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.298 03:18:55 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:12.298 [2024-10-09 03:18:55.355233] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:13.238 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:13.238 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.238 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:13.238 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:16:13.238 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.238 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.238 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.238 03:18:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.238 03:18:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.239 03:18:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.239 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:13.239 "name": "raid_bdev1", 00:16:13.239 "uuid": "8d57d9e2-5915-4bdc-93ff-5a77bdd1df44", 00:16:13.239 "strip_size_kb": 64, 00:16:13.239 "state": "online", 00:16:13.239 "raid_level": "raid5f", 00:16:13.239 "superblock": false, 00:16:13.239 "num_base_bdevs": 3, 00:16:13.239 "num_base_bdevs_discovered": 3, 00:16:13.239 "num_base_bdevs_operational": 3, 00:16:13.239 "process": { 00:16:13.239 "type": "rebuild", 00:16:13.239 "target": "spare", 00:16:13.239 "progress": { 00:16:13.239 "blocks": 20480, 00:16:13.239 "percent": 15 00:16:13.239 } 00:16:13.239 }, 00:16:13.239 "base_bdevs_list": [ 00:16:13.239 { 00:16:13.239 "name": "spare", 00:16:13.239 "uuid": "3ebfad04-8ed6-590e-bc0b-bb7095fa4d96", 00:16:13.239 "is_configured": true, 00:16:13.239 "data_offset": 0, 00:16:13.239 "data_size": 65536 00:16:13.239 }, 00:16:13.239 { 00:16:13.239 "name": "BaseBdev2", 00:16:13.239 "uuid": "614e75fe-dd5a-5b2b-b9b8-8dda30343475", 00:16:13.239 "is_configured": true, 00:16:13.239 "data_offset": 0, 00:16:13.239 "data_size": 65536 00:16:13.239 }, 00:16:13.239 { 00:16:13.239 "name": "BaseBdev3", 00:16:13.239 "uuid": "5d56c879-a480-5f75-a7e6-3982f93ca2d5", 00:16:13.239 "is_configured": true, 00:16:13.239 "data_offset": 0, 00:16:13.239 
"data_size": 65536 00:16:13.239 } 00:16:13.239 ] 00:16:13.239 }' 00:16:13.239 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.239 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:13.239 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.239 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:13.239 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:13.239 03:18:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.239 03:18:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.239 [2024-10-09 03:18:56.498523] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.499 [2024-10-09 03:18:56.564756] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:13.499 [2024-10-09 03:18:56.564847] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:13.499 [2024-10-09 03:18:56.564870] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:13.499 [2024-10-09 03:18:56.564880] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.499 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:13.499 "name": "raid_bdev1", 00:16:13.499 "uuid": "8d57d9e2-5915-4bdc-93ff-5a77bdd1df44", 00:16:13.499 "strip_size_kb": 64, 00:16:13.499 "state": "online", 00:16:13.499 "raid_level": "raid5f", 00:16:13.499 "superblock": false, 00:16:13.499 "num_base_bdevs": 3, 00:16:13.499 "num_base_bdevs_discovered": 2, 00:16:13.499 "num_base_bdevs_operational": 2, 00:16:13.499 "base_bdevs_list": [ 00:16:13.499 { 00:16:13.499 "name": null, 00:16:13.499 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.499 "is_configured": false, 00:16:13.499 "data_offset": 0, 00:16:13.499 "data_size": 65536 00:16:13.499 }, 00:16:13.499 { 00:16:13.499 "name": "BaseBdev2", 00:16:13.499 
"uuid": "614e75fe-dd5a-5b2b-b9b8-8dda30343475", 00:16:13.499 "is_configured": true, 00:16:13.499 "data_offset": 0, 00:16:13.499 "data_size": 65536 00:16:13.500 }, 00:16:13.500 { 00:16:13.500 "name": "BaseBdev3", 00:16:13.500 "uuid": "5d56c879-a480-5f75-a7e6-3982f93ca2d5", 00:16:13.500 "is_configured": true, 00:16:13.500 "data_offset": 0, 00:16:13.500 "data_size": 65536 00:16:13.500 } 00:16:13.500 ] 00:16:13.500 }' 00:16:13.500 03:18:56 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:13.500 03:18:56 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.759 03:18:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:13.759 03:18:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:13.759 03:18:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:13.759 03:18:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:13.759 03:18:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:13.759 03:18:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:13.759 03:18:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.759 03:18:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.759 03:18:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:13.759 03:18:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.018 03:18:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.018 "name": "raid_bdev1", 00:16:14.018 "uuid": "8d57d9e2-5915-4bdc-93ff-5a77bdd1df44", 00:16:14.018 "strip_size_kb": 64, 00:16:14.018 "state": "online", 00:16:14.018 "raid_level": 
"raid5f", 00:16:14.018 "superblock": false, 00:16:14.018 "num_base_bdevs": 3, 00:16:14.018 "num_base_bdevs_discovered": 2, 00:16:14.018 "num_base_bdevs_operational": 2, 00:16:14.018 "base_bdevs_list": [ 00:16:14.018 { 00:16:14.018 "name": null, 00:16:14.018 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.018 "is_configured": false, 00:16:14.018 "data_offset": 0, 00:16:14.018 "data_size": 65536 00:16:14.018 }, 00:16:14.018 { 00:16:14.018 "name": "BaseBdev2", 00:16:14.018 "uuid": "614e75fe-dd5a-5b2b-b9b8-8dda30343475", 00:16:14.018 "is_configured": true, 00:16:14.018 "data_offset": 0, 00:16:14.018 "data_size": 65536 00:16:14.018 }, 00:16:14.018 { 00:16:14.018 "name": "BaseBdev3", 00:16:14.018 "uuid": "5d56c879-a480-5f75-a7e6-3982f93ca2d5", 00:16:14.018 "is_configured": true, 00:16:14.018 "data_offset": 0, 00:16:14.019 "data_size": 65536 00:16:14.019 } 00:16:14.019 ] 00:16:14.019 }' 00:16:14.019 03:18:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.019 03:18:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.019 03:18:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.019 03:18:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.019 03:18:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:14.019 03:18:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.019 03:18:57 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.019 [2024-10-09 03:18:57.187877] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:14.019 [2024-10-09 03:18:57.203636] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:16:14.019 03:18:57 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.019 03:18:57 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:14.019 [2024-10-09 03:18:57.212074] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:14.958 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:14.958 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.959 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:14.959 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:14.959 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.959 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.959 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.959 03:18:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.959 03:18:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:14.959 03:18:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.219 "name": "raid_bdev1", 00:16:15.219 "uuid": "8d57d9e2-5915-4bdc-93ff-5a77bdd1df44", 00:16:15.219 "strip_size_kb": 64, 00:16:15.219 "state": "online", 00:16:15.219 "raid_level": "raid5f", 00:16:15.219 "superblock": false, 00:16:15.219 "num_base_bdevs": 3, 00:16:15.219 "num_base_bdevs_discovered": 3, 00:16:15.219 "num_base_bdevs_operational": 3, 00:16:15.219 "process": { 00:16:15.219 "type": "rebuild", 00:16:15.219 "target": "spare", 00:16:15.219 "progress": { 00:16:15.219 "blocks": 20480, 00:16:15.219 
"percent": 15 00:16:15.219 } 00:16:15.219 }, 00:16:15.219 "base_bdevs_list": [ 00:16:15.219 { 00:16:15.219 "name": "spare", 00:16:15.219 "uuid": "3ebfad04-8ed6-590e-bc0b-bb7095fa4d96", 00:16:15.219 "is_configured": true, 00:16:15.219 "data_offset": 0, 00:16:15.219 "data_size": 65536 00:16:15.219 }, 00:16:15.219 { 00:16:15.219 "name": "BaseBdev2", 00:16:15.219 "uuid": "614e75fe-dd5a-5b2b-b9b8-8dda30343475", 00:16:15.219 "is_configured": true, 00:16:15.219 "data_offset": 0, 00:16:15.219 "data_size": 65536 00:16:15.219 }, 00:16:15.219 { 00:16:15.219 "name": "BaseBdev3", 00:16:15.219 "uuid": "5d56c879-a480-5f75-a7e6-3982f93ca2d5", 00:16:15.219 "is_configured": true, 00:16:15.219 "data_offset": 0, 00:16:15.219 "data_size": 65536 00:16:15.219 } 00:16:15.219 ] 00:16:15.219 }' 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=565 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:15.219 03:18:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.220 03:18:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:15.220 03:18:58 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.220 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:15.220 "name": "raid_bdev1", 00:16:15.220 "uuid": "8d57d9e2-5915-4bdc-93ff-5a77bdd1df44", 00:16:15.220 "strip_size_kb": 64, 00:16:15.220 "state": "online", 00:16:15.220 "raid_level": "raid5f", 00:16:15.220 "superblock": false, 00:16:15.220 "num_base_bdevs": 3, 00:16:15.220 "num_base_bdevs_discovered": 3, 00:16:15.220 "num_base_bdevs_operational": 3, 00:16:15.220 "process": { 00:16:15.220 "type": "rebuild", 00:16:15.220 "target": "spare", 00:16:15.220 "progress": { 00:16:15.220 "blocks": 22528, 00:16:15.220 "percent": 17 00:16:15.220 } 00:16:15.220 }, 00:16:15.220 "base_bdevs_list": [ 00:16:15.220 { 00:16:15.220 "name": "spare", 00:16:15.220 "uuid": "3ebfad04-8ed6-590e-bc0b-bb7095fa4d96", 00:16:15.220 "is_configured": true, 00:16:15.220 "data_offset": 0, 00:16:15.220 "data_size": 65536 00:16:15.220 }, 00:16:15.220 { 00:16:15.220 "name": "BaseBdev2", 00:16:15.220 "uuid": "614e75fe-dd5a-5b2b-b9b8-8dda30343475", 00:16:15.220 "is_configured": true, 00:16:15.220 "data_offset": 0, 00:16:15.220 
"data_size": 65536 00:16:15.220 }, 00:16:15.220 { 00:16:15.220 "name": "BaseBdev3", 00:16:15.220 "uuid": "5d56c879-a480-5f75-a7e6-3982f93ca2d5", 00:16:15.220 "is_configured": true, 00:16:15.220 "data_offset": 0, 00:16:15.220 "data_size": 65536 00:16:15.220 } 00:16:15.220 ] 00:16:15.220 }' 00:16:15.220 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:15.220 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:15.220 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:15.220 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:15.220 03:18:58 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:16.604 "name": "raid_bdev1", 00:16:16.604 "uuid": "8d57d9e2-5915-4bdc-93ff-5a77bdd1df44", 00:16:16.604 "strip_size_kb": 64, 00:16:16.604 "state": "online", 00:16:16.604 "raid_level": "raid5f", 00:16:16.604 "superblock": false, 00:16:16.604 "num_base_bdevs": 3, 00:16:16.604 "num_base_bdevs_discovered": 3, 00:16:16.604 "num_base_bdevs_operational": 3, 00:16:16.604 "process": { 00:16:16.604 "type": "rebuild", 00:16:16.604 "target": "spare", 00:16:16.604 "progress": { 00:16:16.604 "blocks": 45056, 00:16:16.604 "percent": 34 00:16:16.604 } 00:16:16.604 }, 00:16:16.604 "base_bdevs_list": [ 00:16:16.604 { 00:16:16.604 "name": "spare", 00:16:16.604 "uuid": "3ebfad04-8ed6-590e-bc0b-bb7095fa4d96", 00:16:16.604 "is_configured": true, 00:16:16.604 "data_offset": 0, 00:16:16.604 "data_size": 65536 00:16:16.604 }, 00:16:16.604 { 00:16:16.604 "name": "BaseBdev2", 00:16:16.604 "uuid": "614e75fe-dd5a-5b2b-b9b8-8dda30343475", 00:16:16.604 "is_configured": true, 00:16:16.604 "data_offset": 0, 00:16:16.604 "data_size": 65536 00:16:16.604 }, 00:16:16.604 { 00:16:16.604 "name": "BaseBdev3", 00:16:16.604 "uuid": "5d56c879-a480-5f75-a7e6-3982f93ca2d5", 00:16:16.604 "is_configured": true, 00:16:16.604 "data_offset": 0, 00:16:16.604 "data_size": 65536 00:16:16.604 } 00:16:16.604 ] 00:16:16.604 }' 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:16.604 03:18:59 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 
00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:17.547 "name": "raid_bdev1", 00:16:17.547 "uuid": "8d57d9e2-5915-4bdc-93ff-5a77bdd1df44", 00:16:17.547 "strip_size_kb": 64, 00:16:17.547 "state": "online", 00:16:17.547 "raid_level": "raid5f", 00:16:17.547 "superblock": false, 00:16:17.547 "num_base_bdevs": 3, 00:16:17.547 "num_base_bdevs_discovered": 3, 00:16:17.547 "num_base_bdevs_operational": 3, 00:16:17.547 "process": { 00:16:17.547 "type": "rebuild", 00:16:17.547 "target": "spare", 00:16:17.547 "progress": { 00:16:17.547 "blocks": 69632, 00:16:17.547 "percent": 53 00:16:17.547 } 00:16:17.547 }, 00:16:17.547 "base_bdevs_list": [ 00:16:17.547 { 00:16:17.547 "name": "spare", 00:16:17.547 "uuid": 
"3ebfad04-8ed6-590e-bc0b-bb7095fa4d96", 00:16:17.547 "is_configured": true, 00:16:17.547 "data_offset": 0, 00:16:17.547 "data_size": 65536 00:16:17.547 }, 00:16:17.547 { 00:16:17.547 "name": "BaseBdev2", 00:16:17.547 "uuid": "614e75fe-dd5a-5b2b-b9b8-8dda30343475", 00:16:17.547 "is_configured": true, 00:16:17.547 "data_offset": 0, 00:16:17.547 "data_size": 65536 00:16:17.547 }, 00:16:17.547 { 00:16:17.547 "name": "BaseBdev3", 00:16:17.547 "uuid": "5d56c879-a480-5f75-a7e6-3982f93ca2d5", 00:16:17.547 "is_configured": true, 00:16:17.547 "data_offset": 0, 00:16:17.547 "data_size": 65536 00:16:17.547 } 00:16:17.547 ] 00:16:17.547 }' 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:17.547 03:19:00 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:18.493 03:19:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:18.493 03:19:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:18.494 03:19:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:18.494 03:19:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:18.494 03:19:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:18.494 03:19:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:18.494 03:19:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.494 03:19:01 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.494 03:19:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.494 03:19:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:18.494 03:19:01 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.754 03:19:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:18.754 "name": "raid_bdev1", 00:16:18.754 "uuid": "8d57d9e2-5915-4bdc-93ff-5a77bdd1df44", 00:16:18.754 "strip_size_kb": 64, 00:16:18.754 "state": "online", 00:16:18.754 "raid_level": "raid5f", 00:16:18.754 "superblock": false, 00:16:18.754 "num_base_bdevs": 3, 00:16:18.754 "num_base_bdevs_discovered": 3, 00:16:18.754 "num_base_bdevs_operational": 3, 00:16:18.754 "process": { 00:16:18.754 "type": "rebuild", 00:16:18.754 "target": "spare", 00:16:18.754 "progress": { 00:16:18.754 "blocks": 92160, 00:16:18.754 "percent": 70 00:16:18.754 } 00:16:18.754 }, 00:16:18.754 "base_bdevs_list": [ 00:16:18.754 { 00:16:18.754 "name": "spare", 00:16:18.754 "uuid": "3ebfad04-8ed6-590e-bc0b-bb7095fa4d96", 00:16:18.754 "is_configured": true, 00:16:18.754 "data_offset": 0, 00:16:18.754 "data_size": 65536 00:16:18.754 }, 00:16:18.754 { 00:16:18.754 "name": "BaseBdev2", 00:16:18.754 "uuid": "614e75fe-dd5a-5b2b-b9b8-8dda30343475", 00:16:18.754 "is_configured": true, 00:16:18.754 "data_offset": 0, 00:16:18.754 "data_size": 65536 00:16:18.754 }, 00:16:18.754 { 00:16:18.754 "name": "BaseBdev3", 00:16:18.754 "uuid": "5d56c879-a480-5f75-a7e6-3982f93ca2d5", 00:16:18.754 "is_configured": true, 00:16:18.754 "data_offset": 0, 00:16:18.754 "data_size": 65536 00:16:18.754 } 00:16:18.754 ] 00:16:18.754 }' 00:16:18.754 03:19:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:18.754 03:19:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:18.754 03:19:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:18.754 03:19:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:18.754 03:19:01 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:19.695 03:19:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:19.695 03:19:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:19.695 03:19:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:19.695 03:19:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:19.695 03:19:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:19.695 03:19:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:19.695 03:19:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:19.695 03:19:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:19.695 03:19:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.695 03:19:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:19.695 03:19:02 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.695 03:19:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:19.695 "name": "raid_bdev1", 00:16:19.695 "uuid": "8d57d9e2-5915-4bdc-93ff-5a77bdd1df44", 00:16:19.695 "strip_size_kb": 64, 00:16:19.695 "state": "online", 00:16:19.695 "raid_level": "raid5f", 00:16:19.695 "superblock": false, 00:16:19.695 "num_base_bdevs": 3, 00:16:19.695 "num_base_bdevs_discovered": 3, 00:16:19.695 
"num_base_bdevs_operational": 3, 00:16:19.695 "process": { 00:16:19.695 "type": "rebuild", 00:16:19.695 "target": "spare", 00:16:19.695 "progress": { 00:16:19.695 "blocks": 114688, 00:16:19.695 "percent": 87 00:16:19.695 } 00:16:19.695 }, 00:16:19.695 "base_bdevs_list": [ 00:16:19.695 { 00:16:19.695 "name": "spare", 00:16:19.695 "uuid": "3ebfad04-8ed6-590e-bc0b-bb7095fa4d96", 00:16:19.695 "is_configured": true, 00:16:19.695 "data_offset": 0, 00:16:19.695 "data_size": 65536 00:16:19.695 }, 00:16:19.695 { 00:16:19.695 "name": "BaseBdev2", 00:16:19.695 "uuid": "614e75fe-dd5a-5b2b-b9b8-8dda30343475", 00:16:19.695 "is_configured": true, 00:16:19.695 "data_offset": 0, 00:16:19.695 "data_size": 65536 00:16:19.695 }, 00:16:19.695 { 00:16:19.695 "name": "BaseBdev3", 00:16:19.695 "uuid": "5d56c879-a480-5f75-a7e6-3982f93ca2d5", 00:16:19.695 "is_configured": true, 00:16:19.695 "data_offset": 0, 00:16:19.695 "data_size": 65536 00:16:19.695 } 00:16:19.695 ] 00:16:19.695 }' 00:16:19.696 03:19:02 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:19.956 03:19:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:19.956 03:19:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:19.956 03:19:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:19.956 03:19:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:20.526 [2024-10-09 03:19:03.661074] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:20.526 [2024-10-09 03:19:03.661261] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:20.526 [2024-10-09 03:19:03.661313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:20.787 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:16:20.787 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.787 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.787 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.787 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.787 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.787 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.787 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.787 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.787 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:20.787 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.046 "name": "raid_bdev1", 00:16:21.046 "uuid": "8d57d9e2-5915-4bdc-93ff-5a77bdd1df44", 00:16:21.046 "strip_size_kb": 64, 00:16:21.046 "state": "online", 00:16:21.046 "raid_level": "raid5f", 00:16:21.046 "superblock": false, 00:16:21.046 "num_base_bdevs": 3, 00:16:21.046 "num_base_bdevs_discovered": 3, 00:16:21.046 "num_base_bdevs_operational": 3, 00:16:21.046 "base_bdevs_list": [ 00:16:21.046 { 00:16:21.046 "name": "spare", 00:16:21.046 "uuid": "3ebfad04-8ed6-590e-bc0b-bb7095fa4d96", 00:16:21.046 "is_configured": true, 00:16:21.046 "data_offset": 0, 00:16:21.046 "data_size": 65536 00:16:21.046 }, 00:16:21.046 { 00:16:21.046 "name": "BaseBdev2", 00:16:21.046 "uuid": "614e75fe-dd5a-5b2b-b9b8-8dda30343475", 00:16:21.046 "is_configured": true, 00:16:21.046 
"data_offset": 0, 00:16:21.046 "data_size": 65536 00:16:21.046 }, 00:16:21.046 { 00:16:21.046 "name": "BaseBdev3", 00:16:21.046 "uuid": "5d56c879-a480-5f75-a7e6-3982f93ca2d5", 00:16:21.046 "is_configured": true, 00:16:21.046 "data_offset": 0, 00:16:21.046 "data_size": 65536 00:16:21.046 } 00:16:21.046 ] 00:16:21.046 }' 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.046 03:19:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.046 "name": "raid_bdev1", 00:16:21.046 "uuid": "8d57d9e2-5915-4bdc-93ff-5a77bdd1df44", 00:16:21.046 "strip_size_kb": 64, 00:16:21.046 "state": "online", 00:16:21.046 "raid_level": "raid5f", 00:16:21.046 "superblock": false, 00:16:21.046 "num_base_bdevs": 3, 00:16:21.046 "num_base_bdevs_discovered": 3, 00:16:21.046 "num_base_bdevs_operational": 3, 00:16:21.046 "base_bdevs_list": [ 00:16:21.046 { 00:16:21.046 "name": "spare", 00:16:21.046 "uuid": "3ebfad04-8ed6-590e-bc0b-bb7095fa4d96", 00:16:21.046 "is_configured": true, 00:16:21.046 "data_offset": 0, 00:16:21.046 "data_size": 65536 00:16:21.046 }, 00:16:21.046 { 00:16:21.046 "name": "BaseBdev2", 00:16:21.046 "uuid": "614e75fe-dd5a-5b2b-b9b8-8dda30343475", 00:16:21.046 "is_configured": true, 00:16:21.046 "data_offset": 0, 00:16:21.046 "data_size": 65536 00:16:21.046 }, 00:16:21.046 { 00:16:21.046 "name": "BaseBdev3", 00:16:21.046 "uuid": "5d56c879-a480-5f75-a7e6-3982f93ca2d5", 00:16:21.046 "is_configured": true, 00:16:21.046 "data_offset": 0, 00:16:21.046 "data_size": 65536 00:16:21.046 } 00:16:21.046 ] 00:16:21.046 }' 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:21.046 03:19:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.046 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.305 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:21.305 "name": "raid_bdev1", 00:16:21.306 "uuid": "8d57d9e2-5915-4bdc-93ff-5a77bdd1df44", 00:16:21.306 "strip_size_kb": 64, 00:16:21.306 "state": "online", 00:16:21.306 "raid_level": "raid5f", 00:16:21.306 "superblock": false, 00:16:21.306 "num_base_bdevs": 3, 00:16:21.306 "num_base_bdevs_discovered": 3, 00:16:21.306 "num_base_bdevs_operational": 3, 00:16:21.306 "base_bdevs_list": [ 00:16:21.306 { 00:16:21.306 "name": "spare", 00:16:21.306 "uuid": "3ebfad04-8ed6-590e-bc0b-bb7095fa4d96", 00:16:21.306 "is_configured": true, 00:16:21.306 "data_offset": 0, 00:16:21.306 "data_size": 65536 00:16:21.306 }, 00:16:21.306 { 00:16:21.306 
"name": "BaseBdev2", 00:16:21.306 "uuid": "614e75fe-dd5a-5b2b-b9b8-8dda30343475", 00:16:21.306 "is_configured": true, 00:16:21.306 "data_offset": 0, 00:16:21.306 "data_size": 65536 00:16:21.306 }, 00:16:21.306 { 00:16:21.306 "name": "BaseBdev3", 00:16:21.306 "uuid": "5d56c879-a480-5f75-a7e6-3982f93ca2d5", 00:16:21.306 "is_configured": true, 00:16:21.306 "data_offset": 0, 00:16:21.306 "data_size": 65536 00:16:21.306 } 00:16:21.306 ] 00:16:21.306 }' 00:16:21.306 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:21.306 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.566 [2024-10-09 03:19:04.772320] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.566 [2024-10-09 03:19:04.772364] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:21.566 [2024-10-09 03:19:04.772461] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:21.566 [2024-10-09 03:19:04.772554] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:21.566 [2024-10-09 03:19:04.772571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.566 03:19:04 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:21.566 03:19:04 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:21.827 /dev/nbd0 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:21.827 03:19:05 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:21.827 1+0 records in 00:16:21.827 1+0 records out 00:16:21.827 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212612 s, 19.3 MB/s 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:21.827 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:22.088 /dev/nbd1 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:22.088 1+0 records in 00:16:22.088 1+0 records out 00:16:22.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278432 s, 14.7 MB/s 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:22.088 03:19:05 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:22.088 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:22.348 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:22.348 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:22.348 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:22.348 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:22.348 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:16:22.348 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.348 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:22.609 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:22.609 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:22.609 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:22.609 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.609 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.609 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:22.609 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:22.609 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:16:22.609 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:22.609 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:22.868 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:22.868 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:22.868 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:22.868 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:22.868 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:22.868 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:22.868 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:16:22.868 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:16:22.868 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:22.868 03:19:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81849 00:16:22.868 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 81849 ']' 00:16:22.869 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 81849 00:16:22.869 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:16:22.869 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:22.869 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 81849 00:16:22.869 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:22.869 03:19:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:22.869 killing process with pid 81849 00:16:22.869 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 81849' 00:16:22.869 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 81849 00:16:22.869 Received shutdown signal, test time was about 60.000000 seconds 00:16:22.869 00:16:22.869 Latency(us) 00:16:22.869 [2024-10-09T03:19:06.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:22.869 [2024-10-09T03:19:06.172Z] =================================================================================================================== 00:16:22.869 [2024-10-09T03:19:06.172Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:22.869 [2024-10-09 03:19:05.983873] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:22.869 03:19:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 81849 00:16:23.128 [2024-10-09 03:19:06.418636] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:24.512 03:19:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:16:24.512 00:16:24.512 real 0m15.731s 00:16:24.512 user 0m19.027s 00:16:24.512 sys 0m2.295s 00:16:24.512 03:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:24.512 03:19:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.512 ************************************ 00:16:24.512 END TEST raid5f_rebuild_test 00:16:24.512 ************************************ 00:16:24.773 03:19:07 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:16:24.773 03:19:07 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:16:24.773 03:19:07 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:24.773 03:19:07 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:16:24.773 ************************************ 00:16:24.773 START TEST raid5f_rebuild_test_sb 00:16:24.773 ************************************ 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 3 true false true 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82294 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82294 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 82294 ']' 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:24.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:24.773 03:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:24.773 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:24.773 Zero copy mechanism will not be used. 00:16:24.773 [2024-10-09 03:19:07.982291] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:16:24.773 [2024-10-09 03:19:07.982417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82294 ] 00:16:25.033 [2024-10-09 03:19:08.146227] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:25.294 [2024-10-09 03:19:08.403736] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:25.554 [2024-10-09 03:19:08.637070] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.554 [2024-10-09 03:19:08.637116] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:25.554 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:25.554 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:16:25.554 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in 
"${base_bdevs[@]}" 00:16:25.554 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:25.554 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.554 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.815 BaseBdev1_malloc 00:16:25.815 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.815 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:25.815 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.815 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.815 [2024-10-09 03:19:08.864959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:25.815 [2024-10-09 03:19:08.865040] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.815 [2024-10-09 03:19:08.865066] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:25.815 [2024-10-09 03:19:08.865083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.815 [2024-10-09 03:19:08.867549] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.815 [2024-10-09 03:19:08.867595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:25.815 BaseBdev1 00:16:25.815 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.815 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.815 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:25.815 03:19:08 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.816 BaseBdev2_malloc 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.816 [2024-10-09 03:19:08.935702] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:25.816 [2024-10-09 03:19:08.935774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.816 [2024-10-09 03:19:08.935794] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:25.816 [2024-10-09 03:19:08.935809] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.816 [2024-10-09 03:19:08.938221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.816 [2024-10-09 03:19:08.938279] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:25.816 BaseBdev2 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # 
set +x 00:16:25.816 BaseBdev3_malloc 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.816 [2024-10-09 03:19:08.995446] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:25.816 [2024-10-09 03:19:08.995511] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.816 [2024-10-09 03:19:08.995535] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:25.816 [2024-10-09 03:19:08.995546] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.816 [2024-10-09 03:19:08.997826] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.816 [2024-10-09 03:19:08.997875] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:25.816 BaseBdev3 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.816 03:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.816 spare_malloc 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.816 spare_delay 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.816 [2024-10-09 03:19:09.068008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:25.816 [2024-10-09 03:19:09.068061] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:25.816 [2024-10-09 03:19:09.068079] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:16:25.816 [2024-10-09 03:19:09.068090] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:25.816 [2024-10-09 03:19:09.070347] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:25.816 [2024-10-09 03:19:09.070388] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:25.816 spare 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.816 [2024-10-09 03:19:09.080074] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:25.816 [2024-10-09 03:19:09.082131] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:25.816 [2024-10-09 03:19:09.082205] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:25.816 [2024-10-09 03:19:09.082383] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:25.816 [2024-10-09 03:19:09.082395] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:25.816 [2024-10-09 03:19:09.082650] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:25.816 [2024-10-09 03:19:09.088122] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:25.816 [2024-10-09 03:19:09.088149] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:25.816 [2024-10-09 03:19:09.088337] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 
-- # local raid_bdev_info 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:25.816 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.076 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.076 "name": "raid_bdev1", 00:16:26.076 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:26.076 "strip_size_kb": 64, 00:16:26.076 "state": "online", 00:16:26.076 "raid_level": "raid5f", 00:16:26.076 "superblock": true, 00:16:26.076 "num_base_bdevs": 3, 00:16:26.076 "num_base_bdevs_discovered": 3, 00:16:26.076 "num_base_bdevs_operational": 3, 00:16:26.076 "base_bdevs_list": [ 00:16:26.076 { 00:16:26.076 "name": "BaseBdev1", 00:16:26.076 "uuid": "41ed4165-3b50-5cca-97ee-eb95553478d3", 00:16:26.076 "is_configured": true, 00:16:26.076 "data_offset": 2048, 00:16:26.076 "data_size": 63488 00:16:26.077 }, 00:16:26.077 { 00:16:26.077 "name": "BaseBdev2", 00:16:26.077 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:26.077 "is_configured": true, 00:16:26.077 "data_offset": 2048, 00:16:26.077 "data_size": 63488 00:16:26.077 }, 00:16:26.077 { 00:16:26.077 "name": "BaseBdev3", 00:16:26.077 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:26.077 "is_configured": true, 
00:16:26.077 "data_offset": 2048, 00:16:26.077 "data_size": 63488 00:16:26.077 } 00:16:26.077 ] 00:16:26.077 }' 00:16:26.077 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.077 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:26.337 [2024-10-09 03:19:09.554614] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:16:26.337 03:19:09 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:26.337 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:16:26.598 [2024-10-09 03:19:09.818032] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:16:26.598 /dev/nbd0 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 
-- # (( i <= 20 )) 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:26.598 1+0 records in 00:16:26.598 1+0 records out 00:16:26.598 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000297198 s, 13.8 MB/s 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:16:26.598 03:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom 
of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:16:27.168 496+0 records in 00:16:27.168 496+0 records out 00:16:27.168 65011712 bytes (65 MB, 62 MiB) copied, 0.371236 s, 175 MB/s 00:16:27.168 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:27.168 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:27.168 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:27.168 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:27.168 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:27.168 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:27.168 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:27.427 [2024-10-09 03:19:10.478582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.427 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:27.427 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:27.427 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:27.427 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:27.427 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:27.427 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:27.427 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:27.427 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:27.427 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:27.427 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.427 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.427 [2024-10-09 03:19:10.513736] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:27.427 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.427 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:27.428 "name": "raid_bdev1", 00:16:27.428 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:27.428 "strip_size_kb": 64, 00:16:27.428 "state": "online", 00:16:27.428 "raid_level": "raid5f", 00:16:27.428 "superblock": true, 00:16:27.428 "num_base_bdevs": 3, 00:16:27.428 "num_base_bdevs_discovered": 2, 00:16:27.428 "num_base_bdevs_operational": 2, 00:16:27.428 "base_bdevs_list": [ 00:16:27.428 { 00:16:27.428 "name": null, 00:16:27.428 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:27.428 "is_configured": false, 00:16:27.428 "data_offset": 0, 00:16:27.428 "data_size": 63488 00:16:27.428 }, 00:16:27.428 { 00:16:27.428 "name": "BaseBdev2", 00:16:27.428 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:27.428 "is_configured": true, 00:16:27.428 "data_offset": 2048, 00:16:27.428 "data_size": 63488 00:16:27.428 }, 00:16:27.428 { 00:16:27.428 "name": "BaseBdev3", 00:16:27.428 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:27.428 "is_configured": true, 00:16:27.428 "data_offset": 2048, 00:16:27.428 "data_size": 63488 00:16:27.428 } 00:16:27.428 ] 00:16:27.428 }' 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:27.428 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.688 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:27.688 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.688 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:27.688 [2024-10-09 03:19:10.960955] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:27.688 [2024-10-09 03:19:10.975536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:16:27.688 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.688 03:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:27.688 [2024-10-09 03:19:10.982826] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:29.072 03:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:29.072 03:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.072 03:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:29.072 03:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:29.072 03:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.072 03:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.072 03:19:11 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.072 03:19:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.072 03:19:11 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.072 "name": "raid_bdev1", 00:16:29.072 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:29.072 "strip_size_kb": 64, 00:16:29.072 "state": "online", 00:16:29.072 "raid_level": "raid5f", 00:16:29.072 
"superblock": true, 00:16:29.072 "num_base_bdevs": 3, 00:16:29.072 "num_base_bdevs_discovered": 3, 00:16:29.072 "num_base_bdevs_operational": 3, 00:16:29.072 "process": { 00:16:29.072 "type": "rebuild", 00:16:29.072 "target": "spare", 00:16:29.072 "progress": { 00:16:29.072 "blocks": 20480, 00:16:29.072 "percent": 16 00:16:29.072 } 00:16:29.072 }, 00:16:29.072 "base_bdevs_list": [ 00:16:29.072 { 00:16:29.072 "name": "spare", 00:16:29.072 "uuid": "0608c717-10aa-5b37-b04b-cd71f45309ef", 00:16:29.072 "is_configured": true, 00:16:29.072 "data_offset": 2048, 00:16:29.072 "data_size": 63488 00:16:29.072 }, 00:16:29.072 { 00:16:29.072 "name": "BaseBdev2", 00:16:29.072 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:29.072 "is_configured": true, 00:16:29.072 "data_offset": 2048, 00:16:29.072 "data_size": 63488 00:16:29.072 }, 00:16:29.072 { 00:16:29.072 "name": "BaseBdev3", 00:16:29.072 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:29.072 "is_configured": true, 00:16:29.072 "data_offset": 2048, 00:16:29.072 "data_size": 63488 00:16:29.072 } 00:16:29.072 ] 00:16:29.072 }' 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.072 [2024-10-09 03:19:12.122450] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 
00:16:29.072 [2024-10-09 03:19:12.192364] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:29.072 [2024-10-09 03:19:12.192448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.072 [2024-10-09 03:19:12.192468] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:29.072 [2024-10-09 03:19:12.192477] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:29.072 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:29.073 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:29.073 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.073 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:16:29.073 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.073 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.073 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.073 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:29.073 "name": "raid_bdev1", 00:16:29.073 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:29.073 "strip_size_kb": 64, 00:16:29.073 "state": "online", 00:16:29.073 "raid_level": "raid5f", 00:16:29.073 "superblock": true, 00:16:29.073 "num_base_bdevs": 3, 00:16:29.073 "num_base_bdevs_discovered": 2, 00:16:29.073 "num_base_bdevs_operational": 2, 00:16:29.073 "base_bdevs_list": [ 00:16:29.073 { 00:16:29.073 "name": null, 00:16:29.073 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.073 "is_configured": false, 00:16:29.073 "data_offset": 0, 00:16:29.073 "data_size": 63488 00:16:29.073 }, 00:16:29.073 { 00:16:29.073 "name": "BaseBdev2", 00:16:29.073 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:29.073 "is_configured": true, 00:16:29.073 "data_offset": 2048, 00:16:29.073 "data_size": 63488 00:16:29.073 }, 00:16:29.073 { 00:16:29.073 "name": "BaseBdev3", 00:16:29.073 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:29.073 "is_configured": true, 00:16:29.073 "data_offset": 2048, 00:16:29.073 "data_size": 63488 00:16:29.073 } 00:16:29.073 ] 00:16:29.073 }' 00:16:29.073 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:29.073 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.665 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:29.665 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:29.665 03:19:12 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:29.665 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:29.665 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:29.665 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:29.665 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.665 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.666 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.666 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.666 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:29.666 "name": "raid_bdev1", 00:16:29.666 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:29.666 "strip_size_kb": 64, 00:16:29.666 "state": "online", 00:16:29.666 "raid_level": "raid5f", 00:16:29.666 "superblock": true, 00:16:29.666 "num_base_bdevs": 3, 00:16:29.666 "num_base_bdevs_discovered": 2, 00:16:29.666 "num_base_bdevs_operational": 2, 00:16:29.666 "base_bdevs_list": [ 00:16:29.666 { 00:16:29.666 "name": null, 00:16:29.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.666 "is_configured": false, 00:16:29.666 "data_offset": 0, 00:16:29.666 "data_size": 63488 00:16:29.666 }, 00:16:29.666 { 00:16:29.666 "name": "BaseBdev2", 00:16:29.666 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:29.666 "is_configured": true, 00:16:29.666 "data_offset": 2048, 00:16:29.666 "data_size": 63488 00:16:29.666 }, 00:16:29.666 { 00:16:29.666 "name": "BaseBdev3", 00:16:29.666 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:29.666 "is_configured": true, 00:16:29.666 "data_offset": 2048, 00:16:29.666 
"data_size": 63488 00:16:29.666 } 00:16:29.666 ] 00:16:29.666 }' 00:16:29.666 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:29.666 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:29.666 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:29.666 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:29.666 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:29.666 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.666 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:29.666 [2024-10-09 03:19:12.781475] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:29.666 [2024-10-09 03:19:12.795230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:16:29.666 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.666 03:19:12 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:29.666 [2024-10-09 03:19:12.802491] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:30.632 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.632 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.632 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.632 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.632 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # 
local raid_bdev_info 00:16:30.632 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.632 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.632 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.632 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.632 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.632 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.632 "name": "raid_bdev1", 00:16:30.632 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:30.632 "strip_size_kb": 64, 00:16:30.632 "state": "online", 00:16:30.632 "raid_level": "raid5f", 00:16:30.632 "superblock": true, 00:16:30.632 "num_base_bdevs": 3, 00:16:30.632 "num_base_bdevs_discovered": 3, 00:16:30.632 "num_base_bdevs_operational": 3, 00:16:30.632 "process": { 00:16:30.632 "type": "rebuild", 00:16:30.632 "target": "spare", 00:16:30.632 "progress": { 00:16:30.632 "blocks": 20480, 00:16:30.632 "percent": 16 00:16:30.632 } 00:16:30.632 }, 00:16:30.632 "base_bdevs_list": [ 00:16:30.632 { 00:16:30.632 "name": "spare", 00:16:30.632 "uuid": "0608c717-10aa-5b37-b04b-cd71f45309ef", 00:16:30.632 "is_configured": true, 00:16:30.632 "data_offset": 2048, 00:16:30.632 "data_size": 63488 00:16:30.632 }, 00:16:30.632 { 00:16:30.632 "name": "BaseBdev2", 00:16:30.632 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:30.632 "is_configured": true, 00:16:30.632 "data_offset": 2048, 00:16:30.632 "data_size": 63488 00:16:30.632 }, 00:16:30.632 { 00:16:30.632 "name": "BaseBdev3", 00:16:30.632 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:30.632 "is_configured": true, 00:16:30.632 "data_offset": 2048, 00:16:30.632 "data_size": 63488 00:16:30.632 } 00:16:30.632 ] 00:16:30.632 }' 
00:16:30.632 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.632 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:30.632 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:30.893 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=580 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:30.893 "name": "raid_bdev1", 00:16:30.893 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:30.893 "strip_size_kb": 64, 00:16:30.893 "state": "online", 00:16:30.893 "raid_level": "raid5f", 00:16:30.893 "superblock": true, 00:16:30.893 "num_base_bdevs": 3, 00:16:30.893 "num_base_bdevs_discovered": 3, 00:16:30.893 "num_base_bdevs_operational": 3, 00:16:30.893 "process": { 00:16:30.893 "type": "rebuild", 00:16:30.893 "target": "spare", 00:16:30.893 "progress": { 00:16:30.893 "blocks": 22528, 00:16:30.893 "percent": 17 00:16:30.893 } 00:16:30.893 }, 00:16:30.893 "base_bdevs_list": [ 00:16:30.893 { 00:16:30.893 "name": "spare", 00:16:30.893 "uuid": "0608c717-10aa-5b37-b04b-cd71f45309ef", 00:16:30.893 "is_configured": true, 00:16:30.893 "data_offset": 2048, 00:16:30.893 "data_size": 63488 00:16:30.893 }, 00:16:30.893 { 00:16:30.893 "name": "BaseBdev2", 00:16:30.893 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:30.893 "is_configured": true, 00:16:30.893 "data_offset": 2048, 00:16:30.893 "data_size": 63488 00:16:30.893 }, 00:16:30.893 { 00:16:30.893 "name": "BaseBdev3", 00:16:30.893 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:30.893 "is_configured": true, 00:16:30.893 "data_offset": 2048, 00:16:30.893 "data_size": 63488 00:16:30.893 } 00:16:30.893 ] 00:16:30.893 }' 00:16:30.893 03:19:13 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:30.893 03:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:16:30.893 03:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:30.893 03:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:30.893 03:19:14 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:31.835 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:31.835 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:31.835 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:31.835 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:31.835 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:31.835 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:31.835 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:31.835 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.835 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.835 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:31.835 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.835 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:31.835 "name": "raid_bdev1", 00:16:31.835 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:31.835 "strip_size_kb": 64, 00:16:31.835 "state": "online", 00:16:31.835 "raid_level": "raid5f", 00:16:31.835 "superblock": true, 00:16:31.835 "num_base_bdevs": 3, 00:16:31.835 "num_base_bdevs_discovered": 3, 00:16:31.835 
"num_base_bdevs_operational": 3, 00:16:31.835 "process": { 00:16:31.835 "type": "rebuild", 00:16:31.835 "target": "spare", 00:16:31.835 "progress": { 00:16:31.835 "blocks": 45056, 00:16:31.835 "percent": 35 00:16:31.835 } 00:16:31.835 }, 00:16:31.835 "base_bdevs_list": [ 00:16:31.835 { 00:16:31.835 "name": "spare", 00:16:31.835 "uuid": "0608c717-10aa-5b37-b04b-cd71f45309ef", 00:16:31.835 "is_configured": true, 00:16:31.835 "data_offset": 2048, 00:16:31.835 "data_size": 63488 00:16:31.835 }, 00:16:31.835 { 00:16:31.835 "name": "BaseBdev2", 00:16:31.835 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:31.835 "is_configured": true, 00:16:31.835 "data_offset": 2048, 00:16:31.835 "data_size": 63488 00:16:31.835 }, 00:16:31.835 { 00:16:31.835 "name": "BaseBdev3", 00:16:31.835 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:31.835 "is_configured": true, 00:16:31.835 "data_offset": 2048, 00:16:31.835 "data_size": 63488 00:16:31.835 } 00:16:31.835 ] 00:16:31.835 }' 00:16:31.835 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:32.096 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:32.096 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:32.096 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:32.096 03:19:15 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:33.035 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:33.035 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:33.035 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:33.035 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:16:33.035 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:33.035 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:33.035 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.035 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:33.035 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.035 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:33.035 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:33.035 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:33.035 "name": "raid_bdev1", 00:16:33.035 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:33.035 "strip_size_kb": 64, 00:16:33.035 "state": "online", 00:16:33.035 "raid_level": "raid5f", 00:16:33.035 "superblock": true, 00:16:33.035 "num_base_bdevs": 3, 00:16:33.035 "num_base_bdevs_discovered": 3, 00:16:33.035 "num_base_bdevs_operational": 3, 00:16:33.035 "process": { 00:16:33.035 "type": "rebuild", 00:16:33.036 "target": "spare", 00:16:33.036 "progress": { 00:16:33.036 "blocks": 69632, 00:16:33.036 "percent": 54 00:16:33.036 } 00:16:33.036 }, 00:16:33.036 "base_bdevs_list": [ 00:16:33.036 { 00:16:33.036 "name": "spare", 00:16:33.036 "uuid": "0608c717-10aa-5b37-b04b-cd71f45309ef", 00:16:33.036 "is_configured": true, 00:16:33.036 "data_offset": 2048, 00:16:33.036 "data_size": 63488 00:16:33.036 }, 00:16:33.036 { 00:16:33.036 "name": "BaseBdev2", 00:16:33.036 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:33.036 "is_configured": true, 00:16:33.036 "data_offset": 2048, 00:16:33.036 "data_size": 63488 00:16:33.036 }, 00:16:33.036 { 00:16:33.036 "name": "BaseBdev3", 
00:16:33.036 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:33.036 "is_configured": true, 00:16:33.036 "data_offset": 2048, 00:16:33.036 "data_size": 63488 00:16:33.036 } 00:16:33.036 ] 00:16:33.036 }' 00:16:33.036 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:33.036 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:33.036 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:33.296 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:33.296 03:19:16 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:34.236 "name": "raid_bdev1", 00:16:34.236 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:34.236 "strip_size_kb": 64, 00:16:34.236 "state": "online", 00:16:34.236 "raid_level": "raid5f", 00:16:34.236 "superblock": true, 00:16:34.236 "num_base_bdevs": 3, 00:16:34.236 "num_base_bdevs_discovered": 3, 00:16:34.236 "num_base_bdevs_operational": 3, 00:16:34.236 "process": { 00:16:34.236 "type": "rebuild", 00:16:34.236 "target": "spare", 00:16:34.236 "progress": { 00:16:34.236 "blocks": 92160, 00:16:34.236 "percent": 72 00:16:34.236 } 00:16:34.236 }, 00:16:34.236 "base_bdevs_list": [ 00:16:34.236 { 00:16:34.236 "name": "spare", 00:16:34.236 "uuid": "0608c717-10aa-5b37-b04b-cd71f45309ef", 00:16:34.236 "is_configured": true, 00:16:34.236 "data_offset": 2048, 00:16:34.236 "data_size": 63488 00:16:34.236 }, 00:16:34.236 { 00:16:34.236 "name": "BaseBdev2", 00:16:34.236 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:34.236 "is_configured": true, 00:16:34.236 "data_offset": 2048, 00:16:34.236 "data_size": 63488 00:16:34.236 }, 00:16:34.236 { 00:16:34.236 "name": "BaseBdev3", 00:16:34.236 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:34.236 "is_configured": true, 00:16:34.236 "data_offset": 2048, 00:16:34.236 "data_size": 63488 00:16:34.236 } 00:16:34.236 ] 00:16:34.236 }' 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:34.236 03:19:17 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.619 03:19:18 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:35.619 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:35.620 "name": "raid_bdev1", 00:16:35.620 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:35.620 "strip_size_kb": 64, 00:16:35.620 "state": "online", 00:16:35.620 "raid_level": "raid5f", 00:16:35.620 "superblock": true, 00:16:35.620 "num_base_bdevs": 3, 00:16:35.620 "num_base_bdevs_discovered": 3, 00:16:35.620 "num_base_bdevs_operational": 3, 00:16:35.620 "process": { 00:16:35.620 "type": "rebuild", 00:16:35.620 "target": "spare", 00:16:35.620 "progress": { 00:16:35.620 "blocks": 114688, 00:16:35.620 "percent": 90 00:16:35.620 } 00:16:35.620 }, 00:16:35.620 "base_bdevs_list": [ 00:16:35.620 { 00:16:35.620 "name": "spare", 00:16:35.620 "uuid": 
"0608c717-10aa-5b37-b04b-cd71f45309ef", 00:16:35.620 "is_configured": true, 00:16:35.620 "data_offset": 2048, 00:16:35.620 "data_size": 63488 00:16:35.620 }, 00:16:35.620 { 00:16:35.620 "name": "BaseBdev2", 00:16:35.620 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:35.620 "is_configured": true, 00:16:35.620 "data_offset": 2048, 00:16:35.620 "data_size": 63488 00:16:35.620 }, 00:16:35.620 { 00:16:35.620 "name": "BaseBdev3", 00:16:35.620 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:35.620 "is_configured": true, 00:16:35.620 "data_offset": 2048, 00:16:35.620 "data_size": 63488 00:16:35.620 } 00:16:35.620 ] 00:16:35.620 }' 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:35.620 03:19:18 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:35.879 [2024-10-09 03:19:19.045107] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:35.879 [2024-10-09 03:19:19.045206] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:35.879 [2024-10-09 03:19:19.045318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:36.450 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:36.450 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:36.450 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.450 03:19:19 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:36.450 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:36.450 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.450 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.450 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.450 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.450 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.450 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.450 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.450 "name": "raid_bdev1", 00:16:36.450 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:36.450 "strip_size_kb": 64, 00:16:36.450 "state": "online", 00:16:36.450 "raid_level": "raid5f", 00:16:36.450 "superblock": true, 00:16:36.450 "num_base_bdevs": 3, 00:16:36.450 "num_base_bdevs_discovered": 3, 00:16:36.450 "num_base_bdevs_operational": 3, 00:16:36.450 "base_bdevs_list": [ 00:16:36.450 { 00:16:36.450 "name": "spare", 00:16:36.450 "uuid": "0608c717-10aa-5b37-b04b-cd71f45309ef", 00:16:36.450 "is_configured": true, 00:16:36.450 "data_offset": 2048, 00:16:36.450 "data_size": 63488 00:16:36.450 }, 00:16:36.450 { 00:16:36.450 "name": "BaseBdev2", 00:16:36.450 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:36.450 "is_configured": true, 00:16:36.450 "data_offset": 2048, 00:16:36.450 "data_size": 63488 00:16:36.450 }, 00:16:36.450 { 00:16:36.450 "name": "BaseBdev3", 00:16:36.450 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:36.450 "is_configured": true, 00:16:36.451 "data_offset": 2048, 00:16:36.451 "data_size": 63488 00:16:36.451 } 
00:16:36.451 ] 00:16:36.451 }' 00:16:36.451 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:36.711 "name": "raid_bdev1", 00:16:36.711 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:36.711 "strip_size_kb": 64, 00:16:36.711 "state": "online", 00:16:36.711 "raid_level": 
"raid5f", 00:16:36.711 "superblock": true, 00:16:36.711 "num_base_bdevs": 3, 00:16:36.711 "num_base_bdevs_discovered": 3, 00:16:36.711 "num_base_bdevs_operational": 3, 00:16:36.711 "base_bdevs_list": [ 00:16:36.711 { 00:16:36.711 "name": "spare", 00:16:36.711 "uuid": "0608c717-10aa-5b37-b04b-cd71f45309ef", 00:16:36.711 "is_configured": true, 00:16:36.711 "data_offset": 2048, 00:16:36.711 "data_size": 63488 00:16:36.711 }, 00:16:36.711 { 00:16:36.711 "name": "BaseBdev2", 00:16:36.711 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:36.711 "is_configured": true, 00:16:36.711 "data_offset": 2048, 00:16:36.711 "data_size": 63488 00:16:36.711 }, 00:16:36.711 { 00:16:36.711 "name": "BaseBdev3", 00:16:36.711 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:36.711 "is_configured": true, 00:16:36.711 "data_offset": 2048, 00:16:36.711 "data_size": 63488 00:16:36.711 } 00:16:36.711 ] 00:16:36.711 }' 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:36.711 03:19:19 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:36.711 "name": "raid_bdev1", 00:16:36.711 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:36.711 "strip_size_kb": 64, 00:16:36.711 "state": "online", 00:16:36.711 "raid_level": "raid5f", 00:16:36.711 "superblock": true, 00:16:36.711 "num_base_bdevs": 3, 00:16:36.711 "num_base_bdevs_discovered": 3, 00:16:36.711 "num_base_bdevs_operational": 3, 00:16:36.711 "base_bdevs_list": [ 00:16:36.711 { 00:16:36.711 "name": "spare", 00:16:36.711 "uuid": "0608c717-10aa-5b37-b04b-cd71f45309ef", 00:16:36.711 "is_configured": true, 00:16:36.711 "data_offset": 2048, 00:16:36.711 "data_size": 63488 00:16:36.711 }, 00:16:36.711 { 00:16:36.711 "name": "BaseBdev2", 00:16:36.711 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:36.711 "is_configured": true, 00:16:36.711 "data_offset": 2048, 00:16:36.711 
"data_size": 63488 00:16:36.711 }, 00:16:36.711 { 00:16:36.711 "name": "BaseBdev3", 00:16:36.711 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:36.711 "is_configured": true, 00:16:36.711 "data_offset": 2048, 00:16:36.711 "data_size": 63488 00:16:36.711 } 00:16:36.711 ] 00:16:36.711 }' 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:36.711 03:19:19 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:37.282 [2024-10-09 03:19:20.332953] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:37.282 [2024-10-09 03:19:20.332997] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:37.282 [2024-10-09 03:19:20.333091] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:37.282 [2024-10-09 03:19:20.333192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:37.282 [2024-10-09 03:19:20.333214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:37.282 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:37.543 /dev/nbd0 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 
00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:37.543 1+0 records in 00:16:37.543 1+0 records out 00:16:37.543 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000220783 s, 18.6 MB/s 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:37.543 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:37.804 /dev/nbd1 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:37.804 1+0 records in 00:16:37.804 1+0 records out 00:16:37.804 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375706 s, 10.9 MB/s 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- 
# '[' 4096 '!=' 0 ']' 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:37.804 03:19:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:37.804 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:37.804 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:37.804 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:37.804 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:37.804 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:37.804 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:37.804 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:38.065 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:38.065 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:38.065 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:38.065 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.065 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.065 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:38.065 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@41 -- # break 00:16:38.065 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.065 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:38.065 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:38.325 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.325 [2024-10-09 03:19:21.516261] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:38.325 [2024-10-09 03:19:21.516331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:38.325 [2024-10-09 03:19:21.516354] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:16:38.325 [2024-10-09 03:19:21.516365] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:38.325 [2024-10-09 03:19:21.518801] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:38.325 [2024-10-09 03:19:21.518850] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:38.326 [2024-10-09 03:19:21.518938] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:38.326 [2024-10-09 03:19:21.519010] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:38.326 [2024-10-09 03:19:21.519140] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:38.326 [2024-10-09 03:19:21.519241] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:38.326 spare 00:16:38.326 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.326 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:38.326 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.326 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.326 [2024-10-09 03:19:21.619136] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:38.326 [2024-10-09 03:19:21.619166] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:16:38.326 [2024-10-09 03:19:21.619461] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:16:38.326 [2024-10-09 03:19:21.624382] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:38.326 [2024-10-09 03:19:21.624408] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:38.326 [2024-10-09 03:19:21.624586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.586 03:19:21 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:38.586 "name": "raid_bdev1", 00:16:38.586 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:38.586 "strip_size_kb": 64, 00:16:38.586 "state": "online", 00:16:38.586 "raid_level": "raid5f", 00:16:38.586 "superblock": true, 00:16:38.586 "num_base_bdevs": 3, 00:16:38.586 "num_base_bdevs_discovered": 3, 00:16:38.586 "num_base_bdevs_operational": 3, 00:16:38.586 "base_bdevs_list": [ 00:16:38.586 { 00:16:38.586 "name": "spare", 00:16:38.586 "uuid": "0608c717-10aa-5b37-b04b-cd71f45309ef", 00:16:38.586 "is_configured": true, 00:16:38.586 "data_offset": 2048, 00:16:38.586 "data_size": 63488 00:16:38.586 }, 00:16:38.586 { 00:16:38.586 "name": "BaseBdev2", 00:16:38.586 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:38.586 "is_configured": true, 00:16:38.586 "data_offset": 2048, 00:16:38.586 "data_size": 63488 00:16:38.586 }, 00:16:38.586 { 00:16:38.586 "name": "BaseBdev3", 00:16:38.586 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:38.586 "is_configured": true, 00:16:38.586 "data_offset": 2048, 00:16:38.586 "data_size": 63488 00:16:38.586 } 00:16:38.586 ] 00:16:38.586 }' 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:38.586 03:19:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.846 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:38.846 03:19:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:38.846 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:38.846 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:38.846 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:38.846 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:38.846 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:38.846 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.846 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:38.846 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:39.107 "name": "raid_bdev1", 00:16:39.107 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:39.107 "strip_size_kb": 64, 00:16:39.107 "state": "online", 00:16:39.107 "raid_level": "raid5f", 00:16:39.107 "superblock": true, 00:16:39.107 "num_base_bdevs": 3, 00:16:39.107 "num_base_bdevs_discovered": 3, 00:16:39.107 "num_base_bdevs_operational": 3, 00:16:39.107 "base_bdevs_list": [ 00:16:39.107 { 00:16:39.107 "name": "spare", 00:16:39.107 "uuid": "0608c717-10aa-5b37-b04b-cd71f45309ef", 00:16:39.107 "is_configured": true, 00:16:39.107 "data_offset": 2048, 00:16:39.107 "data_size": 63488 00:16:39.107 }, 00:16:39.107 { 00:16:39.107 "name": "BaseBdev2", 00:16:39.107 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:39.107 "is_configured": true, 00:16:39.107 "data_offset": 2048, 00:16:39.107 "data_size": 63488 00:16:39.107 }, 00:16:39.107 { 00:16:39.107 "name": "BaseBdev3", 00:16:39.107 "uuid": 
"825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:39.107 "is_configured": true, 00:16:39.107 "data_offset": 2048, 00:16:39.107 "data_size": 63488 00:16:39.107 } 00:16:39.107 ] 00:16:39.107 }' 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.107 [2024-10-09 03:19:22.293986] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:39.107 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:39.108 
03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:39.108 "name": "raid_bdev1", 00:16:39.108 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:39.108 "strip_size_kb": 64, 00:16:39.108 "state": "online", 00:16:39.108 "raid_level": "raid5f", 00:16:39.108 "superblock": true, 00:16:39.108 "num_base_bdevs": 3, 00:16:39.108 "num_base_bdevs_discovered": 2, 00:16:39.108 "num_base_bdevs_operational": 2, 
00:16:39.108 "base_bdevs_list": [ 00:16:39.108 { 00:16:39.108 "name": null, 00:16:39.108 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:39.108 "is_configured": false, 00:16:39.108 "data_offset": 0, 00:16:39.108 "data_size": 63488 00:16:39.108 }, 00:16:39.108 { 00:16:39.108 "name": "BaseBdev2", 00:16:39.108 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:39.108 "is_configured": true, 00:16:39.108 "data_offset": 2048, 00:16:39.108 "data_size": 63488 00:16:39.108 }, 00:16:39.108 { 00:16:39.108 "name": "BaseBdev3", 00:16:39.108 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:39.108 "is_configured": true, 00:16:39.108 "data_offset": 2048, 00:16:39.108 "data_size": 63488 00:16:39.108 } 00:16:39.108 ] 00:16:39.108 }' 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:39.108 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.679 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:39.679 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:39.679 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:39.679 [2024-10-09 03:19:22.761381] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.679 [2024-10-09 03:19:22.761524] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:39.679 [2024-10-09 03:19:22.761542] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:39.679 [2024-10-09 03:19:22.761576] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:39.679 [2024-10-09 03:19:22.775354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:16:39.679 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:39.679 03:19:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:39.679 [2024-10-09 03:19:22.782446] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:40.620 "name": "raid_bdev1", 00:16:40.620 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:40.620 "strip_size_kb": 64, 00:16:40.620 "state": "online", 00:16:40.620 
"raid_level": "raid5f", 00:16:40.620 "superblock": true, 00:16:40.620 "num_base_bdevs": 3, 00:16:40.620 "num_base_bdevs_discovered": 3, 00:16:40.620 "num_base_bdevs_operational": 3, 00:16:40.620 "process": { 00:16:40.620 "type": "rebuild", 00:16:40.620 "target": "spare", 00:16:40.620 "progress": { 00:16:40.620 "blocks": 20480, 00:16:40.620 "percent": 16 00:16:40.620 } 00:16:40.620 }, 00:16:40.620 "base_bdevs_list": [ 00:16:40.620 { 00:16:40.620 "name": "spare", 00:16:40.620 "uuid": "0608c717-10aa-5b37-b04b-cd71f45309ef", 00:16:40.620 "is_configured": true, 00:16:40.620 "data_offset": 2048, 00:16:40.620 "data_size": 63488 00:16:40.620 }, 00:16:40.620 { 00:16:40.620 "name": "BaseBdev2", 00:16:40.620 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:40.620 "is_configured": true, 00:16:40.620 "data_offset": 2048, 00:16:40.620 "data_size": 63488 00:16:40.620 }, 00:16:40.620 { 00:16:40.620 "name": "BaseBdev3", 00:16:40.620 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:40.620 "is_configured": true, 00:16:40.620 "data_offset": 2048, 00:16:40.620 "data_size": 63488 00:16:40.620 } 00:16:40.620 ] 00:16:40.620 }' 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.620 03:19:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.620 [2024-10-09 03:19:23.909072] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.881 [2024-10-09 03:19:23.991566] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:40.881 [2024-10-09 03:19:23.991632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.881 [2024-10-09 03:19:23.991648] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:40.881 [2024-10-09 03:19:23.991659] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:40.881 "name": "raid_bdev1", 00:16:40.881 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:40.881 "strip_size_kb": 64, 00:16:40.881 "state": "online", 00:16:40.881 "raid_level": "raid5f", 00:16:40.881 "superblock": true, 00:16:40.881 "num_base_bdevs": 3, 00:16:40.881 "num_base_bdevs_discovered": 2, 00:16:40.881 "num_base_bdevs_operational": 2, 00:16:40.881 "base_bdevs_list": [ 00:16:40.881 { 00:16:40.881 "name": null, 00:16:40.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:40.881 "is_configured": false, 00:16:40.881 "data_offset": 0, 00:16:40.881 "data_size": 63488 00:16:40.881 }, 00:16:40.881 { 00:16:40.881 "name": "BaseBdev2", 00:16:40.881 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:40.881 "is_configured": true, 00:16:40.881 "data_offset": 2048, 00:16:40.881 "data_size": 63488 00:16:40.881 }, 00:16:40.881 { 00:16:40.881 "name": "BaseBdev3", 00:16:40.881 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:40.881 "is_configured": true, 00:16:40.881 "data_offset": 2048, 00:16:40.881 "data_size": 63488 00:16:40.881 } 00:16:40.881 ] 00:16:40.881 }' 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:40.881 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.452 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:41.452 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.452 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:41.452 [2024-10-09 03:19:24.480915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:41.452 [2024-10-09 03:19:24.480985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:41.452 [2024-10-09 03:19:24.481007] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:16:41.452 [2024-10-09 03:19:24.481023] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:41.452 [2024-10-09 03:19:24.481548] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:41.452 [2024-10-09 03:19:24.481577] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:41.452 [2024-10-09 03:19:24.481666] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:41.452 [2024-10-09 03:19:24.481684] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:16:41.452 [2024-10-09 03:19:24.481695] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:41.452 [2024-10-09 03:19:24.481717] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:41.452 [2024-10-09 03:19:24.495268] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:16:41.452 spare 00:16:41.452 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.452 03:19:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:41.452 [2024-10-09 03:19:24.502006] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:42.393 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:42.393 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.393 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:42.393 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:42.393 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.393 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.393 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.393 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.393 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.393 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.393 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.393 "name": "raid_bdev1", 00:16:42.393 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:42.393 "strip_size_kb": 64, 00:16:42.393 "state": 
"online", 00:16:42.393 "raid_level": "raid5f", 00:16:42.393 "superblock": true, 00:16:42.393 "num_base_bdevs": 3, 00:16:42.393 "num_base_bdevs_discovered": 3, 00:16:42.393 "num_base_bdevs_operational": 3, 00:16:42.393 "process": { 00:16:42.393 "type": "rebuild", 00:16:42.393 "target": "spare", 00:16:42.393 "progress": { 00:16:42.393 "blocks": 20480, 00:16:42.393 "percent": 16 00:16:42.393 } 00:16:42.393 }, 00:16:42.393 "base_bdevs_list": [ 00:16:42.393 { 00:16:42.393 "name": "spare", 00:16:42.393 "uuid": "0608c717-10aa-5b37-b04b-cd71f45309ef", 00:16:42.393 "is_configured": true, 00:16:42.393 "data_offset": 2048, 00:16:42.393 "data_size": 63488 00:16:42.393 }, 00:16:42.393 { 00:16:42.393 "name": "BaseBdev2", 00:16:42.393 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:42.393 "is_configured": true, 00:16:42.393 "data_offset": 2048, 00:16:42.393 "data_size": 63488 00:16:42.393 }, 00:16:42.393 { 00:16:42.393 "name": "BaseBdev3", 00:16:42.393 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:42.393 "is_configured": true, 00:16:42.393 "data_offset": 2048, 00:16:42.393 "data_size": 63488 00:16:42.393 } 00:16:42.393 ] 00:16:42.393 }' 00:16:42.393 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:42.393 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:42.394 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:42.394 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:42.394 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:42.394 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.394 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.394 [2024-10-09 03:19:25.633227] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.653 [2024-10-09 03:19:25.711009] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:42.653 [2024-10-09 03:19:25.711064] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.653 [2024-10-09 03:19:25.711082] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:42.653 [2024-10-09 03:19:25.711090] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.653 03:19:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:42.653 "name": "raid_bdev1", 00:16:42.653 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:42.653 "strip_size_kb": 64, 00:16:42.653 "state": "online", 00:16:42.653 "raid_level": "raid5f", 00:16:42.653 "superblock": true, 00:16:42.653 "num_base_bdevs": 3, 00:16:42.653 "num_base_bdevs_discovered": 2, 00:16:42.653 "num_base_bdevs_operational": 2, 00:16:42.653 "base_bdevs_list": [ 00:16:42.653 { 00:16:42.653 "name": null, 00:16:42.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.653 "is_configured": false, 00:16:42.653 "data_offset": 0, 00:16:42.653 "data_size": 63488 00:16:42.653 }, 00:16:42.653 { 00:16:42.653 "name": "BaseBdev2", 00:16:42.653 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:42.653 "is_configured": true, 00:16:42.653 "data_offset": 2048, 00:16:42.653 "data_size": 63488 00:16:42.653 }, 00:16:42.653 { 00:16:42.653 "name": "BaseBdev3", 00:16:42.653 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:42.653 "is_configured": true, 00:16:42.653 "data_offset": 2048, 00:16:42.653 "data_size": 63488 00:16:42.653 } 00:16:42.653 ] 00:16:42.653 }' 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:42.653 03:19:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.912 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:42.912 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:42.912 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:42.912 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:42.912 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:42.912 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.912 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.912 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:42.912 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:42.912 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.912 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:42.912 "name": "raid_bdev1", 00:16:42.912 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:42.912 "strip_size_kb": 64, 00:16:42.912 "state": "online", 00:16:42.912 "raid_level": "raid5f", 00:16:42.912 "superblock": true, 00:16:42.912 "num_base_bdevs": 3, 00:16:42.912 "num_base_bdevs_discovered": 2, 00:16:42.912 "num_base_bdevs_operational": 2, 00:16:42.912 "base_bdevs_list": [ 00:16:42.912 { 00:16:42.912 "name": null, 00:16:42.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:42.912 "is_configured": false, 00:16:42.912 "data_offset": 0, 00:16:42.912 "data_size": 63488 00:16:42.912 }, 00:16:42.912 { 00:16:42.912 "name": "BaseBdev2", 00:16:42.912 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:42.912 "is_configured": true, 00:16:42.912 "data_offset": 2048, 00:16:42.912 "data_size": 63488 00:16:42.912 }, 00:16:42.912 { 00:16:42.912 "name": "BaseBdev3", 00:16:42.912 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:42.912 
"is_configured": true, 00:16:42.912 "data_offset": 2048, 00:16:42.912 "data_size": 63488 00:16:42.912 } 00:16:42.912 ] 00:16:42.912 }' 00:16:42.912 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:43.171 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:43.171 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:43.171 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:43.171 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:43.171 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.171 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.171 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.171 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:43.171 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.171 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:43.171 [2024-10-09 03:19:26.281248] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:43.171 [2024-10-09 03:19:26.281306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.171 [2024-10-09 03:19:26.281333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:43.171 [2024-10-09 03:19:26.281343] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.171 [2024-10-09 03:19:26.281877] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.171 
[2024-10-09 03:19:26.281902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:43.171 [2024-10-09 03:19:26.281989] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:43.171 [2024-10-09 03:19:26.282005] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:43.171 [2024-10-09 03:19:26.282020] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:43.171 [2024-10-09 03:19:26.282036] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:43.171 BaseBdev1 00:16:43.171 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.171 03:19:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.112 03:19:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.112 "name": "raid_bdev1", 00:16:44.112 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:44.112 "strip_size_kb": 64, 00:16:44.112 "state": "online", 00:16:44.112 "raid_level": "raid5f", 00:16:44.112 "superblock": true, 00:16:44.112 "num_base_bdevs": 3, 00:16:44.112 "num_base_bdevs_discovered": 2, 00:16:44.112 "num_base_bdevs_operational": 2, 00:16:44.112 "base_bdevs_list": [ 00:16:44.112 { 00:16:44.112 "name": null, 00:16:44.112 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.112 "is_configured": false, 00:16:44.112 "data_offset": 0, 00:16:44.112 "data_size": 63488 00:16:44.112 }, 00:16:44.112 { 00:16:44.112 "name": "BaseBdev2", 00:16:44.112 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:44.112 "is_configured": true, 00:16:44.112 "data_offset": 2048, 00:16:44.112 "data_size": 63488 00:16:44.112 }, 00:16:44.112 { 00:16:44.112 "name": "BaseBdev3", 00:16:44.112 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:44.112 "is_configured": true, 00:16:44.112 "data_offset": 2048, 00:16:44.112 "data_size": 63488 00:16:44.112 } 00:16:44.112 ] 00:16:44.112 }' 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.112 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:44.682 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:44.682 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:44.682 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:44.682 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:44.682 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:44.682 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.682 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.682 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.682 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.682 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.682 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:44.682 "name": "raid_bdev1", 00:16:44.682 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:44.682 "strip_size_kb": 64, 00:16:44.682 "state": "online", 00:16:44.682 "raid_level": "raid5f", 00:16:44.682 "superblock": true, 00:16:44.682 "num_base_bdevs": 3, 00:16:44.682 "num_base_bdevs_discovered": 2, 00:16:44.682 "num_base_bdevs_operational": 2, 00:16:44.682 "base_bdevs_list": [ 00:16:44.682 { 00:16:44.682 "name": null, 00:16:44.682 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.682 "is_configured": false, 00:16:44.682 "data_offset": 0, 00:16:44.682 "data_size": 63488 00:16:44.682 }, 00:16:44.682 { 00:16:44.682 "name": "BaseBdev2", 00:16:44.682 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 
00:16:44.682 "is_configured": true, 00:16:44.682 "data_offset": 2048, 00:16:44.682 "data_size": 63488 00:16:44.682 }, 00:16:44.682 { 00:16:44.682 "name": "BaseBdev3", 00:16:44.683 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:44.683 "is_configured": true, 00:16:44.683 "data_offset": 2048, 00:16:44.683 "data_size": 63488 00:16:44.683 } 00:16:44.683 ] 00:16:44.683 }' 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.683 03:19:27 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:44.683 [2024-10-09 03:19:27.839770] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:44.683 [2024-10-09 03:19:27.839919] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:16:44.683 [2024-10-09 03:19:27.839936] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:44.683 request: 00:16:44.683 { 00:16:44.683 "base_bdev": "BaseBdev1", 00:16:44.683 "raid_bdev": "raid_bdev1", 00:16:44.683 "method": "bdev_raid_add_base_bdev", 00:16:44.683 "req_id": 1 00:16:44.683 } 00:16:44.683 Got JSON-RPC error response 00:16:44.683 response: 00:16:44.683 { 00:16:44.683 "code": -22, 00:16:44.683 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:44.683 } 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:44.683 03:19:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.653 "name": "raid_bdev1", 00:16:45.653 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:45.653 "strip_size_kb": 64, 00:16:45.653 "state": "online", 00:16:45.653 "raid_level": "raid5f", 00:16:45.653 "superblock": true, 00:16:45.653 "num_base_bdevs": 3, 00:16:45.653 "num_base_bdevs_discovered": 2, 00:16:45.653 "num_base_bdevs_operational": 2, 00:16:45.653 "base_bdevs_list": [ 00:16:45.653 { 00:16:45.653 "name": null, 00:16:45.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.653 "is_configured": false, 00:16:45.653 "data_offset": 0, 00:16:45.653 "data_size": 63488 00:16:45.653 }, 00:16:45.653 { 00:16:45.653 
"name": "BaseBdev2", 00:16:45.653 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:45.653 "is_configured": true, 00:16:45.653 "data_offset": 2048, 00:16:45.653 "data_size": 63488 00:16:45.653 }, 00:16:45.653 { 00:16:45.653 "name": "BaseBdev3", 00:16:45.653 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:45.653 "is_configured": true, 00:16:45.653 "data_offset": 2048, 00:16:45.653 "data_size": 63488 00:16:45.653 } 00:16:45.653 ] 00:16:45.653 }' 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.653 03:19:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.239 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:46.239 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:46.240 "name": "raid_bdev1", 00:16:46.240 "uuid": "c84a5e2c-3ab9-4945-bd12-81d3b0302b48", 00:16:46.240 
"strip_size_kb": 64, 00:16:46.240 "state": "online", 00:16:46.240 "raid_level": "raid5f", 00:16:46.240 "superblock": true, 00:16:46.240 "num_base_bdevs": 3, 00:16:46.240 "num_base_bdevs_discovered": 2, 00:16:46.240 "num_base_bdevs_operational": 2, 00:16:46.240 "base_bdevs_list": [ 00:16:46.240 { 00:16:46.240 "name": null, 00:16:46.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:46.240 "is_configured": false, 00:16:46.240 "data_offset": 0, 00:16:46.240 "data_size": 63488 00:16:46.240 }, 00:16:46.240 { 00:16:46.240 "name": "BaseBdev2", 00:16:46.240 "uuid": "6d78a4a2-fac9-58a8-ac57-572bf084274c", 00:16:46.240 "is_configured": true, 00:16:46.240 "data_offset": 2048, 00:16:46.240 "data_size": 63488 00:16:46.240 }, 00:16:46.240 { 00:16:46.240 "name": "BaseBdev3", 00:16:46.240 "uuid": "825be5db-0e48-5cf3-8e36-a05a3228a00d", 00:16:46.240 "is_configured": true, 00:16:46.240 "data_offset": 2048, 00:16:46.240 "data_size": 63488 00:16:46.240 } 00:16:46.240 ] 00:16:46.240 }' 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82294 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 82294 ']' 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 82294 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:46.240 03:19:29 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 82294 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:46.240 killing process with pid 82294 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 82294' 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 82294 00:16:46.240 Received shutdown signal, test time was about 60.000000 seconds 00:16:46.240 00:16:46.240 Latency(us) 00:16:46.240 [2024-10-09T03:19:29.543Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:46.240 [2024-10-09T03:19:29.543Z] =================================================================================================================== 00:16:46.240 [2024-10-09T03:19:29.543Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:46.240 [2024-10-09 03:19:29.457697] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:46.240 [2024-10-09 03:19:29.457828] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:46.240 03:19:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 82294 00:16:46.240 [2024-10-09 03:19:29.457909] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:46.240 [2024-10-09 03:19:29.457922] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:46.811 [2024-10-09 03:19:29.874882] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.194 03:19:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:48.194 00:16:48.194 real 0m23.326s 00:16:48.194 user 0m29.634s 
00:16:48.194 sys 0m2.713s 00:16:48.194 03:19:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:48.194 03:19:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:48.194 ************************************ 00:16:48.194 END TEST raid5f_rebuild_test_sb 00:16:48.194 ************************************ 00:16:48.194 03:19:31 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:48.194 03:19:31 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:16:48.194 03:19:31 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:48.194 03:19:31 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:48.194 03:19:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.194 ************************************ 00:16:48.194 START TEST raid5f_state_function_test 00:16:48.194 ************************************ 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 false 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83050 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:48.194 Process raid pid: 83050 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83050' 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83050 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@831 -- # '[' -z 83050 ']' 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:48.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:48.194 03:19:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.194 [2024-10-09 03:19:31.383033] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:16:48.194 [2024-10-09 03:19:31.383169] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:48.454 [2024-10-09 03:19:31.551914] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.714 [2024-10-09 03:19:31.804238] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.978 [2024-10-09 03:19:32.040742] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.978 [2024-10-09 03:19:32.040814] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # return 0 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.978 [2024-10-09 03:19:32.197863] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:48.978 [2024-10-09 03:19:32.197915] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:48.978 [2024-10-09 03:19:32.197927] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:48.978 [2024-10-09 03:19:32.197938] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:48.978 [2024-10-09 03:19:32.197944] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:16:48.978 [2024-10-09 03:19:32.197953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:48.978 [2024-10-09 03:19:32.197958] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:48.978 [2024-10-09 03:19:32.197967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.978 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:48.979 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.979 03:19:32 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.979 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.979 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.979 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.979 "name": "Existed_Raid", 00:16:48.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.979 "strip_size_kb": 64, 00:16:48.979 "state": "configuring", 00:16:48.979 "raid_level": "raid5f", 00:16:48.979 "superblock": false, 00:16:48.979 "num_base_bdevs": 4, 00:16:48.979 "num_base_bdevs_discovered": 0, 00:16:48.979 "num_base_bdevs_operational": 4, 00:16:48.979 "base_bdevs_list": [ 00:16:48.979 { 00:16:48.979 "name": "BaseBdev1", 00:16:48.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.979 "is_configured": false, 00:16:48.979 "data_offset": 0, 00:16:48.979 "data_size": 0 00:16:48.979 }, 00:16:48.979 { 00:16:48.979 "name": "BaseBdev2", 00:16:48.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.979 "is_configured": false, 00:16:48.979 "data_offset": 0, 00:16:48.979 "data_size": 0 00:16:48.979 }, 00:16:48.979 { 00:16:48.979 "name": "BaseBdev3", 00:16:48.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.979 "is_configured": false, 00:16:48.979 "data_offset": 0, 00:16:48.979 "data_size": 0 00:16:48.979 }, 00:16:48.979 { 00:16:48.979 "name": "BaseBdev4", 00:16:48.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.979 "is_configured": false, 00:16:48.979 "data_offset": 0, 00:16:48.979 "data_size": 0 00:16:48.979 } 00:16:48.979 ] 00:16:48.979 }' 00:16:48.979 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.979 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.549 [2024-10-09 03:19:32.660945] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:49.549 [2024-10-09 03:19:32.660988] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.549 [2024-10-09 03:19:32.668975] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.549 [2024-10-09 03:19:32.669008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.549 [2024-10-09 03:19:32.669016] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.549 [2024-10-09 03:19:32.669024] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.549 [2024-10-09 03:19:32.669030] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:49.549 [2024-10-09 03:19:32.669038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:49.549 [2024-10-09 03:19:32.669044] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
00:16:49.549 [2024-10-09 03:19:32.669052] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.549 [2024-10-09 03:19:32.748715] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:49.549 BaseBdev1 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.549 
03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.549 [ 00:16:49.549 { 00:16:49.549 "name": "BaseBdev1", 00:16:49.549 "aliases": [ 00:16:49.549 "2678ab60-4bd8-46fc-abec-b3f96314266b" 00:16:49.549 ], 00:16:49.549 "product_name": "Malloc disk", 00:16:49.549 "block_size": 512, 00:16:49.549 "num_blocks": 65536, 00:16:49.549 "uuid": "2678ab60-4bd8-46fc-abec-b3f96314266b", 00:16:49.549 "assigned_rate_limits": { 00:16:49.549 "rw_ios_per_sec": 0, 00:16:49.549 "rw_mbytes_per_sec": 0, 00:16:49.549 "r_mbytes_per_sec": 0, 00:16:49.549 "w_mbytes_per_sec": 0 00:16:49.549 }, 00:16:49.549 "claimed": true, 00:16:49.549 "claim_type": "exclusive_write", 00:16:49.549 "zoned": false, 00:16:49.549 "supported_io_types": { 00:16:49.549 "read": true, 00:16:49.549 "write": true, 00:16:49.549 "unmap": true, 00:16:49.549 "flush": true, 00:16:49.549 "reset": true, 00:16:49.549 "nvme_admin": false, 00:16:49.549 "nvme_io": false, 00:16:49.549 "nvme_io_md": false, 00:16:49.549 "write_zeroes": true, 00:16:49.549 "zcopy": true, 00:16:49.549 "get_zone_info": false, 00:16:49.549 "zone_management": false, 00:16:49.549 "zone_append": false, 00:16:49.549 "compare": false, 00:16:49.549 "compare_and_write": false, 00:16:49.549 "abort": true, 00:16:49.549 "seek_hole": false, 00:16:49.549 "seek_data": false, 00:16:49.549 "copy": true, 00:16:49.549 "nvme_iov_md": false 00:16:49.549 }, 00:16:49.549 "memory_domains": [ 00:16:49.549 { 00:16:49.549 "dma_device_id": "system", 00:16:49.549 "dma_device_type": 1 00:16:49.549 }, 00:16:49.549 { 00:16:49.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:49.549 "dma_device_type": 2 00:16:49.549 } 00:16:49.549 ], 00:16:49.549 "driver_specific": {} 00:16:49.549 } 
00:16:49.549 ] 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:16:49.549 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:49.549 "name": "Existed_Raid", 00:16:49.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.549 "strip_size_kb": 64, 00:16:49.549 "state": "configuring", 00:16:49.549 "raid_level": "raid5f", 00:16:49.549 "superblock": false, 00:16:49.549 "num_base_bdevs": 4, 00:16:49.549 "num_base_bdevs_discovered": 1, 00:16:49.549 "num_base_bdevs_operational": 4, 00:16:49.550 "base_bdevs_list": [ 00:16:49.550 { 00:16:49.550 "name": "BaseBdev1", 00:16:49.550 "uuid": "2678ab60-4bd8-46fc-abec-b3f96314266b", 00:16:49.550 "is_configured": true, 00:16:49.550 "data_offset": 0, 00:16:49.550 "data_size": 65536 00:16:49.550 }, 00:16:49.550 { 00:16:49.550 "name": "BaseBdev2", 00:16:49.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.550 "is_configured": false, 00:16:49.550 "data_offset": 0, 00:16:49.550 "data_size": 0 00:16:49.550 }, 00:16:49.550 { 00:16:49.550 "name": "BaseBdev3", 00:16:49.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.550 "is_configured": false, 00:16:49.550 "data_offset": 0, 00:16:49.550 "data_size": 0 00:16:49.550 }, 00:16:49.550 { 00:16:49.550 "name": "BaseBdev4", 00:16:49.550 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.550 "is_configured": false, 00:16:49.550 "data_offset": 0, 00:16:49.550 "data_size": 0 00:16:49.550 } 00:16:49.550 ] 00:16:49.550 }' 00:16:49.550 03:19:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:49.550 03:19:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.118 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:50.118 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.118 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.118 
[2024-10-09 03:19:33.227938] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.118 [2024-10-09 03:19:33.228000] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:50.118 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.118 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:50.118 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.118 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.118 [2024-10-09 03:19:33.239945] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.118 [2024-10-09 03:19:33.242003] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.118 [2024-10-09 03:19:33.242044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.118 [2024-10-09 03:19:33.242053] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:50.118 [2024-10-09 03:19:33.242063] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:50.118 [2024-10-09 03:19:33.242069] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:16:50.118 [2024-10-09 03:19:33.242078] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:16:50.118 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( 
i < num_base_bdevs )) 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.119 "name": "Existed_Raid", 00:16:50.119 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:50.119 "strip_size_kb": 64, 00:16:50.119 "state": "configuring", 00:16:50.119 "raid_level": "raid5f", 00:16:50.119 "superblock": false, 00:16:50.119 "num_base_bdevs": 4, 00:16:50.119 "num_base_bdevs_discovered": 1, 00:16:50.119 "num_base_bdevs_operational": 4, 00:16:50.119 "base_bdevs_list": [ 00:16:50.119 { 00:16:50.119 "name": "BaseBdev1", 00:16:50.119 "uuid": "2678ab60-4bd8-46fc-abec-b3f96314266b", 00:16:50.119 "is_configured": true, 00:16:50.119 "data_offset": 0, 00:16:50.119 "data_size": 65536 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "name": "BaseBdev2", 00:16:50.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.119 "is_configured": false, 00:16:50.119 "data_offset": 0, 00:16:50.119 "data_size": 0 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "name": "BaseBdev3", 00:16:50.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.119 "is_configured": false, 00:16:50.119 "data_offset": 0, 00:16:50.119 "data_size": 0 00:16:50.119 }, 00:16:50.119 { 00:16:50.119 "name": "BaseBdev4", 00:16:50.119 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.119 "is_configured": false, 00:16:50.119 "data_offset": 0, 00:16:50.119 "data_size": 0 00:16:50.119 } 00:16:50.119 ] 00:16:50.119 }' 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.119 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.379 [2024-10-09 03:19:33.662523] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:50.379 BaseBdev2 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.379 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.639 [ 00:16:50.639 { 00:16:50.639 "name": "BaseBdev2", 00:16:50.639 "aliases": [ 00:16:50.639 "ccd94c1f-f107-49ce-8c4c-fc5cfe95a4e0" 00:16:50.639 ], 00:16:50.639 "product_name": "Malloc disk", 00:16:50.639 "block_size": 512, 00:16:50.639 "num_blocks": 65536, 00:16:50.639 "uuid": "ccd94c1f-f107-49ce-8c4c-fc5cfe95a4e0", 00:16:50.639 "assigned_rate_limits": { 00:16:50.639 "rw_ios_per_sec": 0, 00:16:50.639 "rw_mbytes_per_sec": 0, 00:16:50.639 
"r_mbytes_per_sec": 0, 00:16:50.639 "w_mbytes_per_sec": 0 00:16:50.639 }, 00:16:50.639 "claimed": true, 00:16:50.639 "claim_type": "exclusive_write", 00:16:50.639 "zoned": false, 00:16:50.639 "supported_io_types": { 00:16:50.639 "read": true, 00:16:50.639 "write": true, 00:16:50.639 "unmap": true, 00:16:50.639 "flush": true, 00:16:50.639 "reset": true, 00:16:50.639 "nvme_admin": false, 00:16:50.639 "nvme_io": false, 00:16:50.639 "nvme_io_md": false, 00:16:50.639 "write_zeroes": true, 00:16:50.639 "zcopy": true, 00:16:50.639 "get_zone_info": false, 00:16:50.639 "zone_management": false, 00:16:50.639 "zone_append": false, 00:16:50.639 "compare": false, 00:16:50.639 "compare_and_write": false, 00:16:50.639 "abort": true, 00:16:50.639 "seek_hole": false, 00:16:50.639 "seek_data": false, 00:16:50.639 "copy": true, 00:16:50.639 "nvme_iov_md": false 00:16:50.639 }, 00:16:50.639 "memory_domains": [ 00:16:50.639 { 00:16:50.639 "dma_device_id": "system", 00:16:50.639 "dma_device_type": 1 00:16:50.639 }, 00:16:50.639 { 00:16:50.639 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:50.639 "dma_device_type": 2 00:16:50.639 } 00:16:50.639 ], 00:16:50.639 "driver_specific": {} 00:16:50.639 } 00:16:50.639 ] 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # 
local expected_state=configuring 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.639 "name": "Existed_Raid", 00:16:50.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.639 "strip_size_kb": 64, 00:16:50.639 "state": "configuring", 00:16:50.639 "raid_level": "raid5f", 00:16:50.639 "superblock": false, 00:16:50.639 "num_base_bdevs": 4, 00:16:50.639 "num_base_bdevs_discovered": 2, 00:16:50.639 "num_base_bdevs_operational": 4, 00:16:50.639 "base_bdevs_list": [ 00:16:50.639 { 00:16:50.639 "name": "BaseBdev1", 00:16:50.639 "uuid": 
"2678ab60-4bd8-46fc-abec-b3f96314266b", 00:16:50.639 "is_configured": true, 00:16:50.639 "data_offset": 0, 00:16:50.639 "data_size": 65536 00:16:50.639 }, 00:16:50.639 { 00:16:50.639 "name": "BaseBdev2", 00:16:50.639 "uuid": "ccd94c1f-f107-49ce-8c4c-fc5cfe95a4e0", 00:16:50.639 "is_configured": true, 00:16:50.639 "data_offset": 0, 00:16:50.639 "data_size": 65536 00:16:50.639 }, 00:16:50.639 { 00:16:50.639 "name": "BaseBdev3", 00:16:50.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.639 "is_configured": false, 00:16:50.639 "data_offset": 0, 00:16:50.639 "data_size": 0 00:16:50.639 }, 00:16:50.639 { 00:16:50.639 "name": "BaseBdev4", 00:16:50.639 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.639 "is_configured": false, 00:16:50.639 "data_offset": 0, 00:16:50.639 "data_size": 0 00:16:50.639 } 00:16:50.639 ] 00:16:50.639 }' 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.639 03:19:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.899 [2024-10-09 03:19:34.175298] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:50.899 BaseBdev3 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- 
# local bdev_timeout= 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.899 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:50.899 [ 00:16:50.899 { 00:16:50.899 "name": "BaseBdev3", 00:16:50.899 "aliases": [ 00:16:50.899 "c447748e-f338-43a0-894f-a6bf742b3735" 00:16:50.899 ], 00:16:50.899 "product_name": "Malloc disk", 00:16:50.899 "block_size": 512, 00:16:50.899 "num_blocks": 65536, 00:16:51.159 "uuid": "c447748e-f338-43a0-894f-a6bf742b3735", 00:16:51.159 "assigned_rate_limits": { 00:16:51.159 "rw_ios_per_sec": 0, 00:16:51.159 "rw_mbytes_per_sec": 0, 00:16:51.159 "r_mbytes_per_sec": 0, 00:16:51.159 "w_mbytes_per_sec": 0 00:16:51.159 }, 00:16:51.159 "claimed": true, 00:16:51.159 "claim_type": "exclusive_write", 00:16:51.159 "zoned": false, 00:16:51.159 "supported_io_types": { 00:16:51.159 "read": true, 00:16:51.159 "write": true, 00:16:51.159 "unmap": true, 00:16:51.159 "flush": true, 00:16:51.159 "reset": true, 00:16:51.159 "nvme_admin": false, 
00:16:51.159 "nvme_io": false, 00:16:51.159 "nvme_io_md": false, 00:16:51.159 "write_zeroes": true, 00:16:51.159 "zcopy": true, 00:16:51.159 "get_zone_info": false, 00:16:51.159 "zone_management": false, 00:16:51.159 "zone_append": false, 00:16:51.159 "compare": false, 00:16:51.159 "compare_and_write": false, 00:16:51.159 "abort": true, 00:16:51.159 "seek_hole": false, 00:16:51.159 "seek_data": false, 00:16:51.159 "copy": true, 00:16:51.159 "nvme_iov_md": false 00:16:51.159 }, 00:16:51.159 "memory_domains": [ 00:16:51.159 { 00:16:51.159 "dma_device_id": "system", 00:16:51.159 "dma_device_type": 1 00:16:51.159 }, 00:16:51.159 { 00:16:51.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.159 "dma_device_type": 2 00:16:51.159 } 00:16:51.159 ], 00:16:51.159 "driver_specific": {} 00:16:51.159 } 00:16:51.159 ] 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 
00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.159 "name": "Existed_Raid", 00:16:51.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.159 "strip_size_kb": 64, 00:16:51.159 "state": "configuring", 00:16:51.159 "raid_level": "raid5f", 00:16:51.159 "superblock": false, 00:16:51.159 "num_base_bdevs": 4, 00:16:51.159 "num_base_bdevs_discovered": 3, 00:16:51.159 "num_base_bdevs_operational": 4, 00:16:51.159 "base_bdevs_list": [ 00:16:51.159 { 00:16:51.159 "name": "BaseBdev1", 00:16:51.159 "uuid": "2678ab60-4bd8-46fc-abec-b3f96314266b", 00:16:51.159 "is_configured": true, 00:16:51.159 "data_offset": 0, 00:16:51.159 "data_size": 65536 00:16:51.159 }, 00:16:51.159 { 00:16:51.159 "name": "BaseBdev2", 00:16:51.159 "uuid": "ccd94c1f-f107-49ce-8c4c-fc5cfe95a4e0", 00:16:51.159 "is_configured": true, 00:16:51.159 "data_offset": 0, 00:16:51.159 "data_size": 65536 00:16:51.159 }, 00:16:51.159 { 
00:16:51.159 "name": "BaseBdev3", 00:16:51.159 "uuid": "c447748e-f338-43a0-894f-a6bf742b3735", 00:16:51.159 "is_configured": true, 00:16:51.159 "data_offset": 0, 00:16:51.159 "data_size": 65536 00:16:51.159 }, 00:16:51.159 { 00:16:51.159 "name": "BaseBdev4", 00:16:51.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.159 "is_configured": false, 00:16:51.159 "data_offset": 0, 00:16:51.159 "data_size": 0 00:16:51.159 } 00:16:51.159 ] 00:16:51.159 }' 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.159 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 [2024-10-09 03:19:34.653939] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:51.419 [2024-10-09 03:19:34.654027] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:51.419 [2024-10-09 03:19:34.654037] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:51.419 [2024-10-09 03:19:34.654337] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:16:51.419 [2024-10-09 03:19:34.661428] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:51.419 [2024-10-09 03:19:34.661459] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:51.419 [2024-10-09 03:19:34.661758] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:51.419 BaseBdev4 00:16:51.419 03:19:34 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.419 [ 00:16:51.419 { 00:16:51.419 "name": "BaseBdev4", 00:16:51.419 "aliases": [ 00:16:51.419 "05805c36-92a1-4b1e-a80a-9ddcb7469170" 00:16:51.419 ], 00:16:51.419 "product_name": "Malloc disk", 00:16:51.419 "block_size": 512, 00:16:51.419 "num_blocks": 65536, 00:16:51.419 "uuid": "05805c36-92a1-4b1e-a80a-9ddcb7469170", 00:16:51.419 "assigned_rate_limits": { 00:16:51.419 "rw_ios_per_sec": 0, 00:16:51.419 
"rw_mbytes_per_sec": 0, 00:16:51.419 "r_mbytes_per_sec": 0, 00:16:51.419 "w_mbytes_per_sec": 0 00:16:51.419 }, 00:16:51.419 "claimed": true, 00:16:51.419 "claim_type": "exclusive_write", 00:16:51.419 "zoned": false, 00:16:51.419 "supported_io_types": { 00:16:51.419 "read": true, 00:16:51.419 "write": true, 00:16:51.419 "unmap": true, 00:16:51.419 "flush": true, 00:16:51.419 "reset": true, 00:16:51.419 "nvme_admin": false, 00:16:51.419 "nvme_io": false, 00:16:51.419 "nvme_io_md": false, 00:16:51.419 "write_zeroes": true, 00:16:51.419 "zcopy": true, 00:16:51.419 "get_zone_info": false, 00:16:51.419 "zone_management": false, 00:16:51.419 "zone_append": false, 00:16:51.419 "compare": false, 00:16:51.419 "compare_and_write": false, 00:16:51.419 "abort": true, 00:16:51.419 "seek_hole": false, 00:16:51.419 "seek_data": false, 00:16:51.419 "copy": true, 00:16:51.419 "nvme_iov_md": false 00:16:51.419 }, 00:16:51.419 "memory_domains": [ 00:16:51.419 { 00:16:51.419 "dma_device_id": "system", 00:16:51.419 "dma_device_type": 1 00:16:51.419 }, 00:16:51.419 { 00:16:51.419 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.419 "dma_device_type": 2 00:16:51.419 } 00:16:51.419 ], 00:16:51.419 "driver_specific": {} 00:16:51.419 } 00:16:51.419 ] 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:51.419 03:19:34 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.419 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.679 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.679 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:51.679 "name": "Existed_Raid", 00:16:51.679 "uuid": "11b7d893-9fb8-4130-90c9-f7b6d673bbd8", 00:16:51.679 "strip_size_kb": 64, 00:16:51.679 "state": "online", 00:16:51.679 "raid_level": "raid5f", 00:16:51.679 "superblock": false, 00:16:51.679 "num_base_bdevs": 4, 00:16:51.679 "num_base_bdevs_discovered": 4, 00:16:51.679 "num_base_bdevs_operational": 4, 00:16:51.679 "base_bdevs_list": [ 00:16:51.679 { 00:16:51.679 "name": 
"BaseBdev1", 00:16:51.679 "uuid": "2678ab60-4bd8-46fc-abec-b3f96314266b", 00:16:51.679 "is_configured": true, 00:16:51.679 "data_offset": 0, 00:16:51.679 "data_size": 65536 00:16:51.679 }, 00:16:51.679 { 00:16:51.679 "name": "BaseBdev2", 00:16:51.679 "uuid": "ccd94c1f-f107-49ce-8c4c-fc5cfe95a4e0", 00:16:51.679 "is_configured": true, 00:16:51.679 "data_offset": 0, 00:16:51.679 "data_size": 65536 00:16:51.679 }, 00:16:51.679 { 00:16:51.679 "name": "BaseBdev3", 00:16:51.679 "uuid": "c447748e-f338-43a0-894f-a6bf742b3735", 00:16:51.679 "is_configured": true, 00:16:51.679 "data_offset": 0, 00:16:51.679 "data_size": 65536 00:16:51.679 }, 00:16:51.679 { 00:16:51.679 "name": "BaseBdev4", 00:16:51.679 "uuid": "05805c36-92a1-4b1e-a80a-9ddcb7469170", 00:16:51.679 "is_configured": true, 00:16:51.679 "data_offset": 0, 00:16:51.679 "data_size": 65536 00:16:51.679 } 00:16:51.679 ] 00:16:51.679 }' 00:16:51.679 03:19:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:51.679 03:19:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.939 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:51.939 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:51.939 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:51.939 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:51.939 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:51.939 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:51.939 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:51.939 03:19:35 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:51.939 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.939 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:51.939 [2024-10-09 03:19:35.165618] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:51.939 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.939 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:51.939 "name": "Existed_Raid", 00:16:51.939 "aliases": [ 00:16:51.939 "11b7d893-9fb8-4130-90c9-f7b6d673bbd8" 00:16:51.939 ], 00:16:51.939 "product_name": "Raid Volume", 00:16:51.939 "block_size": 512, 00:16:51.939 "num_blocks": 196608, 00:16:51.939 "uuid": "11b7d893-9fb8-4130-90c9-f7b6d673bbd8", 00:16:51.939 "assigned_rate_limits": { 00:16:51.939 "rw_ios_per_sec": 0, 00:16:51.939 "rw_mbytes_per_sec": 0, 00:16:51.939 "r_mbytes_per_sec": 0, 00:16:51.939 "w_mbytes_per_sec": 0 00:16:51.939 }, 00:16:51.940 "claimed": false, 00:16:51.940 "zoned": false, 00:16:51.940 "supported_io_types": { 00:16:51.940 "read": true, 00:16:51.940 "write": true, 00:16:51.940 "unmap": false, 00:16:51.940 "flush": false, 00:16:51.940 "reset": true, 00:16:51.940 "nvme_admin": false, 00:16:51.940 "nvme_io": false, 00:16:51.940 "nvme_io_md": false, 00:16:51.940 "write_zeroes": true, 00:16:51.940 "zcopy": false, 00:16:51.940 "get_zone_info": false, 00:16:51.940 "zone_management": false, 00:16:51.940 "zone_append": false, 00:16:51.940 "compare": false, 00:16:51.940 "compare_and_write": false, 00:16:51.940 "abort": false, 00:16:51.940 "seek_hole": false, 00:16:51.940 "seek_data": false, 00:16:51.940 "copy": false, 00:16:51.940 "nvme_iov_md": false 00:16:51.940 }, 00:16:51.940 "driver_specific": { 00:16:51.940 "raid": { 00:16:51.940 "uuid": "11b7d893-9fb8-4130-90c9-f7b6d673bbd8", 00:16:51.940 "strip_size_kb": 64, 
00:16:51.940 "state": "online", 00:16:51.940 "raid_level": "raid5f", 00:16:51.940 "superblock": false, 00:16:51.940 "num_base_bdevs": 4, 00:16:51.940 "num_base_bdevs_discovered": 4, 00:16:51.940 "num_base_bdevs_operational": 4, 00:16:51.940 "base_bdevs_list": [ 00:16:51.940 { 00:16:51.940 "name": "BaseBdev1", 00:16:51.940 "uuid": "2678ab60-4bd8-46fc-abec-b3f96314266b", 00:16:51.940 "is_configured": true, 00:16:51.940 "data_offset": 0, 00:16:51.940 "data_size": 65536 00:16:51.940 }, 00:16:51.940 { 00:16:51.940 "name": "BaseBdev2", 00:16:51.940 "uuid": "ccd94c1f-f107-49ce-8c4c-fc5cfe95a4e0", 00:16:51.940 "is_configured": true, 00:16:51.940 "data_offset": 0, 00:16:51.940 "data_size": 65536 00:16:51.940 }, 00:16:51.940 { 00:16:51.940 "name": "BaseBdev3", 00:16:51.940 "uuid": "c447748e-f338-43a0-894f-a6bf742b3735", 00:16:51.940 "is_configured": true, 00:16:51.940 "data_offset": 0, 00:16:51.940 "data_size": 65536 00:16:51.940 }, 00:16:51.940 { 00:16:51.940 "name": "BaseBdev4", 00:16:51.940 "uuid": "05805c36-92a1-4b1e-a80a-9ddcb7469170", 00:16:51.940 "is_configured": true, 00:16:51.940 "data_offset": 0, 00:16:51.940 "data_size": 65536 00:16:51.940 } 00:16:51.940 ] 00:16:51.940 } 00:16:51.940 } 00:16:51.940 }' 00:16:51.940 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:52.200 BaseBdev2 00:16:52.200 BaseBdev3 00:16:52.200 BaseBdev4' 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.200 03:19:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.200 03:19:35 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:52.200 [2024-10-09 03:19:35.496932] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.460 03:19:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.460 "name": "Existed_Raid", 00:16:52.460 "uuid": "11b7d893-9fb8-4130-90c9-f7b6d673bbd8", 00:16:52.460 "strip_size_kb": 64, 00:16:52.460 "state": "online", 00:16:52.460 "raid_level": "raid5f", 00:16:52.460 "superblock": false, 00:16:52.460 "num_base_bdevs": 4, 00:16:52.460 "num_base_bdevs_discovered": 3, 00:16:52.460 "num_base_bdevs_operational": 3, 00:16:52.460 "base_bdevs_list": [ 00:16:52.460 { 00:16:52.460 "name": null, 00:16:52.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.460 "is_configured": false, 00:16:52.460 "data_offset": 0, 00:16:52.460 "data_size": 65536 00:16:52.460 }, 00:16:52.460 { 00:16:52.460 "name": "BaseBdev2", 00:16:52.460 "uuid": "ccd94c1f-f107-49ce-8c4c-fc5cfe95a4e0", 00:16:52.460 "is_configured": true, 00:16:52.460 "data_offset": 0, 00:16:52.460 "data_size": 65536 00:16:52.460 }, 00:16:52.460 { 00:16:52.460 "name": "BaseBdev3", 00:16:52.460 "uuid": "c447748e-f338-43a0-894f-a6bf742b3735", 00:16:52.460 "is_configured": true, 00:16:52.460 "data_offset": 0, 00:16:52.460 "data_size": 65536 00:16:52.460 }, 00:16:52.460 { 00:16:52.460 "name": "BaseBdev4", 00:16:52.460 "uuid": "05805c36-92a1-4b1e-a80a-9ddcb7469170", 00:16:52.460 "is_configured": true, 00:16:52.460 "data_offset": 0, 00:16:52.460 "data_size": 65536 00:16:52.460 } 00:16:52.460 ] 00:16:52.460 }' 00:16:52.460 
03:19:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.460 03:19:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.719 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:52.719 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:52.719 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:52.719 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.719 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.719 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.978 [2024-10-09 03:19:36.064889] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:52.978 [2024-10-09 03:19:36.065021] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:52.978 [2024-10-09 03:19:36.165591] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.978 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:52.978 [2024-10-09 03:19:36.225503] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.238 [2024-10-09 03:19:36.386993] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:53.238 [2024-10-09 03:19:36.387063] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:16:53.238 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.499 BaseBdev2 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.499 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.499 [ 00:16:53.499 { 00:16:53.499 "name": "BaseBdev2", 00:16:53.499 "aliases": [ 00:16:53.499 "34e2bb82-005a-4240-873b-202065eb9812" 00:16:53.499 ], 00:16:53.499 "product_name": "Malloc disk", 00:16:53.499 "block_size": 512, 00:16:53.499 "num_blocks": 65536, 00:16:53.499 "uuid": "34e2bb82-005a-4240-873b-202065eb9812", 00:16:53.499 "assigned_rate_limits": { 00:16:53.499 "rw_ios_per_sec": 0, 00:16:53.499 "rw_mbytes_per_sec": 0, 00:16:53.499 "r_mbytes_per_sec": 0, 00:16:53.499 "w_mbytes_per_sec": 0 00:16:53.499 }, 00:16:53.499 "claimed": false, 00:16:53.499 "zoned": false, 00:16:53.499 "supported_io_types": { 00:16:53.499 "read": true, 00:16:53.499 "write": true, 00:16:53.499 "unmap": true, 00:16:53.499 "flush": true, 00:16:53.499 "reset": true, 00:16:53.499 "nvme_admin": false, 00:16:53.499 "nvme_io": false, 00:16:53.499 "nvme_io_md": false, 00:16:53.499 "write_zeroes": true, 00:16:53.499 "zcopy": true, 00:16:53.499 "get_zone_info": false, 00:16:53.499 "zone_management": false, 00:16:53.499 "zone_append": false, 00:16:53.499 "compare": false, 00:16:53.500 "compare_and_write": false, 00:16:53.500 "abort": true, 00:16:53.500 "seek_hole": false, 00:16:53.500 "seek_data": false, 00:16:53.500 "copy": true, 00:16:53.500 "nvme_iov_md": false 00:16:53.500 }, 00:16:53.500 "memory_domains": [ 00:16:53.500 { 00:16:53.500 "dma_device_id": "system", 00:16:53.500 
"dma_device_type": 1 00:16:53.500 }, 00:16:53.500 { 00:16:53.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.500 "dma_device_type": 2 00:16:53.500 } 00:16:53.500 ], 00:16:53.500 "driver_specific": {} 00:16:53.500 } 00:16:53.500 ] 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.500 BaseBdev3 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:53.500 03:19:36 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.500 [ 00:16:53.500 { 00:16:53.500 "name": "BaseBdev3", 00:16:53.500 "aliases": [ 00:16:53.500 "9cea93bf-408d-42b0-8f49-89397913d58f" 00:16:53.500 ], 00:16:53.500 "product_name": "Malloc disk", 00:16:53.500 "block_size": 512, 00:16:53.500 "num_blocks": 65536, 00:16:53.500 "uuid": "9cea93bf-408d-42b0-8f49-89397913d58f", 00:16:53.500 "assigned_rate_limits": { 00:16:53.500 "rw_ios_per_sec": 0, 00:16:53.500 "rw_mbytes_per_sec": 0, 00:16:53.500 "r_mbytes_per_sec": 0, 00:16:53.500 "w_mbytes_per_sec": 0 00:16:53.500 }, 00:16:53.500 "claimed": false, 00:16:53.500 "zoned": false, 00:16:53.500 "supported_io_types": { 00:16:53.500 "read": true, 00:16:53.500 "write": true, 00:16:53.500 "unmap": true, 00:16:53.500 "flush": true, 00:16:53.500 "reset": true, 00:16:53.500 "nvme_admin": false, 00:16:53.500 "nvme_io": false, 00:16:53.500 "nvme_io_md": false, 00:16:53.500 "write_zeroes": true, 00:16:53.500 "zcopy": true, 00:16:53.500 "get_zone_info": false, 00:16:53.500 "zone_management": false, 00:16:53.500 "zone_append": false, 00:16:53.500 "compare": false, 00:16:53.500 "compare_and_write": false, 00:16:53.500 "abort": true, 00:16:53.500 "seek_hole": false, 00:16:53.500 "seek_data": false, 00:16:53.500 "copy": true, 00:16:53.500 "nvme_iov_md": false 00:16:53.500 }, 00:16:53.500 "memory_domains": [ 00:16:53.500 { 00:16:53.500 
"dma_device_id": "system", 00:16:53.500 "dma_device_type": 1 00:16:53.500 }, 00:16:53.500 { 00:16:53.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.500 "dma_device_type": 2 00:16:53.500 } 00:16:53.500 ], 00:16:53.500 "driver_specific": {} 00:16:53.500 } 00:16:53.500 ] 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.500 BaseBdev4 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 
00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.500 [ 00:16:53.500 { 00:16:53.500 "name": "BaseBdev4", 00:16:53.500 "aliases": [ 00:16:53.500 "48132805-daba-4c17-a1ad-18bad9850518" 00:16:53.500 ], 00:16:53.500 "product_name": "Malloc disk", 00:16:53.500 "block_size": 512, 00:16:53.500 "num_blocks": 65536, 00:16:53.500 "uuid": "48132805-daba-4c17-a1ad-18bad9850518", 00:16:53.500 "assigned_rate_limits": { 00:16:53.500 "rw_ios_per_sec": 0, 00:16:53.500 "rw_mbytes_per_sec": 0, 00:16:53.500 "r_mbytes_per_sec": 0, 00:16:53.500 "w_mbytes_per_sec": 0 00:16:53.500 }, 00:16:53.500 "claimed": false, 00:16:53.500 "zoned": false, 00:16:53.500 "supported_io_types": { 00:16:53.500 "read": true, 00:16:53.500 "write": true, 00:16:53.500 "unmap": true, 00:16:53.500 "flush": true, 00:16:53.500 "reset": true, 00:16:53.500 "nvme_admin": false, 00:16:53.500 "nvme_io": false, 00:16:53.500 "nvme_io_md": false, 00:16:53.500 "write_zeroes": true, 00:16:53.500 "zcopy": true, 00:16:53.500 "get_zone_info": false, 00:16:53.500 "zone_management": false, 00:16:53.500 "zone_append": false, 00:16:53.500 "compare": false, 00:16:53.500 "compare_and_write": false, 00:16:53.500 "abort": true, 00:16:53.500 "seek_hole": false, 00:16:53.500 "seek_data": false, 00:16:53.500 "copy": true, 00:16:53.500 "nvme_iov_md": false 00:16:53.500 }, 00:16:53.500 "memory_domains": [ 
00:16:53.500 { 00:16:53.500 "dma_device_id": "system", 00:16:53.500 "dma_device_type": 1 00:16:53.500 }, 00:16:53.500 { 00:16:53.500 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.500 "dma_device_type": 2 00:16:53.500 } 00:16:53.500 ], 00:16:53.500 "driver_specific": {} 00:16:53.500 } 00:16:53.500 ] 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.500 [2024-10-09 03:19:36.791463] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:53.500 [2024-10-09 03:19:36.791516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:53.500 [2024-10-09 03:19:36.791539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.500 [2024-10-09 03:19:36.793524] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:53.500 [2024-10-09 03:19:36.793578] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:53.500 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:53.760 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.760 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.760 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.760 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:53.760 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.760 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:53.760 "name": "Existed_Raid", 00:16:53.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.760 "strip_size_kb": 64, 00:16:53.760 "state": "configuring", 00:16:53.760 "raid_level": "raid5f", 00:16:53.760 
"superblock": false, 00:16:53.760 "num_base_bdevs": 4, 00:16:53.760 "num_base_bdevs_discovered": 3, 00:16:53.760 "num_base_bdevs_operational": 4, 00:16:53.760 "base_bdevs_list": [ 00:16:53.760 { 00:16:53.760 "name": "BaseBdev1", 00:16:53.760 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.760 "is_configured": false, 00:16:53.760 "data_offset": 0, 00:16:53.760 "data_size": 0 00:16:53.760 }, 00:16:53.760 { 00:16:53.760 "name": "BaseBdev2", 00:16:53.760 "uuid": "34e2bb82-005a-4240-873b-202065eb9812", 00:16:53.760 "is_configured": true, 00:16:53.760 "data_offset": 0, 00:16:53.760 "data_size": 65536 00:16:53.760 }, 00:16:53.760 { 00:16:53.760 "name": "BaseBdev3", 00:16:53.760 "uuid": "9cea93bf-408d-42b0-8f49-89397913d58f", 00:16:53.760 "is_configured": true, 00:16:53.760 "data_offset": 0, 00:16:53.760 "data_size": 65536 00:16:53.760 }, 00:16:53.760 { 00:16:53.760 "name": "BaseBdev4", 00:16:53.760 "uuid": "48132805-daba-4c17-a1ad-18bad9850518", 00:16:53.760 "is_configured": true, 00:16:53.760 "data_offset": 0, 00:16:53.760 "data_size": 65536 00:16:53.760 } 00:16:53.760 ] 00:16:53.760 }' 00:16:53.760 03:19:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:53.760 03:19:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.019 [2024-10-09 03:19:37.186968] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.019 "name": "Existed_Raid", 00:16:54.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.019 "strip_size_kb": 64, 00:16:54.019 "state": "configuring", 00:16:54.019 "raid_level": "raid5f", 00:16:54.019 "superblock": false, 
00:16:54.019 "num_base_bdevs": 4, 00:16:54.019 "num_base_bdevs_discovered": 2, 00:16:54.019 "num_base_bdevs_operational": 4, 00:16:54.019 "base_bdevs_list": [ 00:16:54.019 { 00:16:54.019 "name": "BaseBdev1", 00:16:54.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.019 "is_configured": false, 00:16:54.019 "data_offset": 0, 00:16:54.019 "data_size": 0 00:16:54.019 }, 00:16:54.019 { 00:16:54.019 "name": null, 00:16:54.019 "uuid": "34e2bb82-005a-4240-873b-202065eb9812", 00:16:54.019 "is_configured": false, 00:16:54.019 "data_offset": 0, 00:16:54.019 "data_size": 65536 00:16:54.019 }, 00:16:54.019 { 00:16:54.019 "name": "BaseBdev3", 00:16:54.019 "uuid": "9cea93bf-408d-42b0-8f49-89397913d58f", 00:16:54.019 "is_configured": true, 00:16:54.019 "data_offset": 0, 00:16:54.019 "data_size": 65536 00:16:54.019 }, 00:16:54.019 { 00:16:54.019 "name": "BaseBdev4", 00:16:54.019 "uuid": "48132805-daba-4c17-a1ad-18bad9850518", 00:16:54.019 "is_configured": true, 00:16:54.019 "data_offset": 0, 00:16:54.019 "data_size": 65536 00:16:54.019 } 00:16:54.019 ] 00:16:54.019 }' 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.019 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:16:54.589 
03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.589 [2024-10-09 03:19:37.690828] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:54.589 BaseBdev1 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.589 
03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.589 [ 00:16:54.589 { 00:16:54.589 "name": "BaseBdev1", 00:16:54.589 "aliases": [ 00:16:54.589 "31950ac8-6a42-4d61-9808-1a5cf7706355" 00:16:54.589 ], 00:16:54.589 "product_name": "Malloc disk", 00:16:54.589 "block_size": 512, 00:16:54.589 "num_blocks": 65536, 00:16:54.589 "uuid": "31950ac8-6a42-4d61-9808-1a5cf7706355", 00:16:54.589 "assigned_rate_limits": { 00:16:54.589 "rw_ios_per_sec": 0, 00:16:54.589 "rw_mbytes_per_sec": 0, 00:16:54.589 "r_mbytes_per_sec": 0, 00:16:54.589 "w_mbytes_per_sec": 0 00:16:54.589 }, 00:16:54.589 "claimed": true, 00:16:54.589 "claim_type": "exclusive_write", 00:16:54.589 "zoned": false, 00:16:54.589 "supported_io_types": { 00:16:54.589 "read": true, 00:16:54.589 "write": true, 00:16:54.589 "unmap": true, 00:16:54.589 "flush": true, 00:16:54.589 "reset": true, 00:16:54.589 "nvme_admin": false, 00:16:54.589 "nvme_io": false, 00:16:54.589 "nvme_io_md": false, 00:16:54.589 "write_zeroes": true, 00:16:54.589 "zcopy": true, 00:16:54.589 "get_zone_info": false, 00:16:54.589 "zone_management": false, 00:16:54.589 "zone_append": false, 00:16:54.589 "compare": false, 00:16:54.589 "compare_and_write": false, 00:16:54.589 "abort": true, 00:16:54.589 "seek_hole": false, 00:16:54.589 "seek_data": false, 00:16:54.589 "copy": true, 00:16:54.589 "nvme_iov_md": false 00:16:54.589 }, 00:16:54.589 "memory_domains": [ 00:16:54.589 { 00:16:54.589 "dma_device_id": "system", 00:16:54.589 "dma_device_type": 1 00:16:54.589 }, 00:16:54.589 { 00:16:54.589 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.589 "dma_device_type": 2 00:16:54.589 } 00:16:54.589 ], 00:16:54.589 "driver_specific": {} 00:16:54.589 } 00:16:54.589 ] 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:54.589 03:19:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:54.589 "name": "Existed_Raid", 00:16:54.589 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:54.589 "strip_size_kb": 64, 00:16:54.589 "state": 
"configuring", 00:16:54.589 "raid_level": "raid5f", 00:16:54.589 "superblock": false, 00:16:54.589 "num_base_bdevs": 4, 00:16:54.589 "num_base_bdevs_discovered": 3, 00:16:54.589 "num_base_bdevs_operational": 4, 00:16:54.589 "base_bdevs_list": [ 00:16:54.589 { 00:16:54.589 "name": "BaseBdev1", 00:16:54.589 "uuid": "31950ac8-6a42-4d61-9808-1a5cf7706355", 00:16:54.589 "is_configured": true, 00:16:54.589 "data_offset": 0, 00:16:54.589 "data_size": 65536 00:16:54.589 }, 00:16:54.589 { 00:16:54.589 "name": null, 00:16:54.589 "uuid": "34e2bb82-005a-4240-873b-202065eb9812", 00:16:54.589 "is_configured": false, 00:16:54.589 "data_offset": 0, 00:16:54.589 "data_size": 65536 00:16:54.589 }, 00:16:54.589 { 00:16:54.589 "name": "BaseBdev3", 00:16:54.589 "uuid": "9cea93bf-408d-42b0-8f49-89397913d58f", 00:16:54.589 "is_configured": true, 00:16:54.589 "data_offset": 0, 00:16:54.589 "data_size": 65536 00:16:54.589 }, 00:16:54.589 { 00:16:54.589 "name": "BaseBdev4", 00:16:54.589 "uuid": "48132805-daba-4c17-a1ad-18bad9850518", 00:16:54.589 "is_configured": true, 00:16:54.589 "data_offset": 0, 00:16:54.589 "data_size": 65536 00:16:54.589 } 00:16:54.589 ] 00:16:54.589 }' 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:54.589 03:19:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.158 03:19:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.158 [2024-10-09 03:19:38.217980] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.158 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.159 03:19:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.159 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.159 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.159 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.159 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.159 "name": "Existed_Raid", 00:16:55.159 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.159 "strip_size_kb": 64, 00:16:55.159 "state": "configuring", 00:16:55.159 "raid_level": "raid5f", 00:16:55.159 "superblock": false, 00:16:55.159 "num_base_bdevs": 4, 00:16:55.159 "num_base_bdevs_discovered": 2, 00:16:55.159 "num_base_bdevs_operational": 4, 00:16:55.159 "base_bdevs_list": [ 00:16:55.159 { 00:16:55.159 "name": "BaseBdev1", 00:16:55.159 "uuid": "31950ac8-6a42-4d61-9808-1a5cf7706355", 00:16:55.159 "is_configured": true, 00:16:55.159 "data_offset": 0, 00:16:55.159 "data_size": 65536 00:16:55.159 }, 00:16:55.159 { 00:16:55.159 "name": null, 00:16:55.159 "uuid": "34e2bb82-005a-4240-873b-202065eb9812", 00:16:55.159 "is_configured": false, 00:16:55.159 "data_offset": 0, 00:16:55.159 "data_size": 65536 00:16:55.159 }, 00:16:55.159 { 00:16:55.159 "name": null, 00:16:55.159 "uuid": "9cea93bf-408d-42b0-8f49-89397913d58f", 00:16:55.159 "is_configured": false, 00:16:55.159 "data_offset": 0, 00:16:55.159 "data_size": 65536 00:16:55.159 }, 00:16:55.159 { 00:16:55.159 "name": "BaseBdev4", 00:16:55.159 "uuid": "48132805-daba-4c17-a1ad-18bad9850518", 00:16:55.159 "is_configured": true, 00:16:55.159 "data_offset": 0, 00:16:55.159 "data_size": 65536 00:16:55.159 } 00:16:55.159 ] 00:16:55.159 }' 00:16:55.159 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.159 03:19:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.418 [2024-10-09 03:19:38.685234] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.418 
03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.418 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.419 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.419 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.678 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.678 "name": "Existed_Raid", 00:16:55.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.678 "strip_size_kb": 64, 00:16:55.678 "state": "configuring", 00:16:55.678 "raid_level": "raid5f", 00:16:55.678 "superblock": false, 00:16:55.678 "num_base_bdevs": 4, 00:16:55.678 "num_base_bdevs_discovered": 3, 00:16:55.678 "num_base_bdevs_operational": 4, 00:16:55.678 "base_bdevs_list": [ 00:16:55.678 { 00:16:55.678 "name": "BaseBdev1", 00:16:55.678 "uuid": "31950ac8-6a42-4d61-9808-1a5cf7706355", 00:16:55.678 "is_configured": true, 00:16:55.678 "data_offset": 0, 00:16:55.678 "data_size": 65536 00:16:55.678 }, 00:16:55.678 { 00:16:55.678 "name": null, 00:16:55.678 "uuid": "34e2bb82-005a-4240-873b-202065eb9812", 00:16:55.678 "is_configured": 
false, 00:16:55.678 "data_offset": 0, 00:16:55.679 "data_size": 65536 00:16:55.679 }, 00:16:55.679 { 00:16:55.679 "name": "BaseBdev3", 00:16:55.679 "uuid": "9cea93bf-408d-42b0-8f49-89397913d58f", 00:16:55.679 "is_configured": true, 00:16:55.679 "data_offset": 0, 00:16:55.679 "data_size": 65536 00:16:55.679 }, 00:16:55.679 { 00:16:55.679 "name": "BaseBdev4", 00:16:55.679 "uuid": "48132805-daba-4c17-a1ad-18bad9850518", 00:16:55.679 "is_configured": true, 00:16:55.679 "data_offset": 0, 00:16:55.679 "data_size": 65536 00:16:55.679 } 00:16:55.679 ] 00:16:55.679 }' 00:16:55.679 03:19:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.679 03:19:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.938 [2024-10-09 03:19:39.136972] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.938 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.197 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.197 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.197 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.197 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.197 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.197 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.197 "name": "Existed_Raid", 00:16:56.197 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:56.197 "strip_size_kb": 64, 00:16:56.197 "state": "configuring", 00:16:56.197 "raid_level": "raid5f", 00:16:56.197 "superblock": false, 00:16:56.197 "num_base_bdevs": 4, 00:16:56.197 "num_base_bdevs_discovered": 2, 00:16:56.197 "num_base_bdevs_operational": 4, 00:16:56.197 "base_bdevs_list": [ 00:16:56.198 { 00:16:56.198 "name": null, 00:16:56.198 "uuid": "31950ac8-6a42-4d61-9808-1a5cf7706355", 00:16:56.198 "is_configured": false, 00:16:56.198 "data_offset": 0, 00:16:56.198 "data_size": 65536 00:16:56.198 }, 00:16:56.198 { 00:16:56.198 "name": null, 00:16:56.198 "uuid": "34e2bb82-005a-4240-873b-202065eb9812", 00:16:56.198 "is_configured": false, 00:16:56.198 "data_offset": 0, 00:16:56.198 "data_size": 65536 00:16:56.198 }, 00:16:56.198 { 00:16:56.198 "name": "BaseBdev3", 00:16:56.198 "uuid": "9cea93bf-408d-42b0-8f49-89397913d58f", 00:16:56.198 "is_configured": true, 00:16:56.198 "data_offset": 0, 00:16:56.198 "data_size": 65536 00:16:56.198 }, 00:16:56.198 { 00:16:56.198 "name": "BaseBdev4", 00:16:56.198 "uuid": "48132805-daba-4c17-a1ad-18bad9850518", 00:16:56.198 "is_configured": true, 00:16:56.198 "data_offset": 0, 00:16:56.198 "data_size": 65536 00:16:56.198 } 00:16:56.198 ] 00:16:56.198 }' 00:16:56.198 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.198 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.457 [2024-10-09 03:19:39.715340] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.457 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.716 03:19:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.716 "name": "Existed_Raid", 00:16:56.716 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.716 "strip_size_kb": 64, 00:16:56.716 "state": "configuring", 00:16:56.716 "raid_level": "raid5f", 00:16:56.716 "superblock": false, 00:16:56.716 "num_base_bdevs": 4, 00:16:56.716 "num_base_bdevs_discovered": 3, 00:16:56.716 "num_base_bdevs_operational": 4, 00:16:56.716 "base_bdevs_list": [ 00:16:56.716 { 00:16:56.716 "name": null, 00:16:56.716 "uuid": "31950ac8-6a42-4d61-9808-1a5cf7706355", 00:16:56.716 "is_configured": false, 00:16:56.716 "data_offset": 0, 00:16:56.716 "data_size": 65536 00:16:56.716 }, 00:16:56.716 { 00:16:56.716 "name": "BaseBdev2", 00:16:56.716 "uuid": "34e2bb82-005a-4240-873b-202065eb9812", 00:16:56.716 "is_configured": true, 00:16:56.716 "data_offset": 0, 00:16:56.716 "data_size": 65536 00:16:56.716 }, 00:16:56.716 { 00:16:56.716 "name": "BaseBdev3", 00:16:56.716 "uuid": "9cea93bf-408d-42b0-8f49-89397913d58f", 00:16:56.716 "is_configured": true, 00:16:56.716 "data_offset": 0, 00:16:56.716 "data_size": 65536 00:16:56.716 }, 00:16:56.716 { 00:16:56.716 "name": "BaseBdev4", 00:16:56.716 "uuid": "48132805-daba-4c17-a1ad-18bad9850518", 00:16:56.716 "is_configured": true, 00:16:56.716 "data_offset": 0, 00:16:56.716 "data_size": 65536 00:16:56.716 } 00:16:56.716 ] 00:16:56.716 }' 00:16:56.716 03:19:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.716 03:19:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.976 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.976 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.976 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.976 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:56.976 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.976 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:16:56.976 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.976 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.976 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.976 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:56.976 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.976 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 31950ac8-6a42-4d61-9808-1a5cf7706355 00:16:56.976 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.976 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.235 [2024-10-09 03:19:40.308646] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:57.235 [2024-10-09 
03:19:40.308718] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:16:57.235 [2024-10-09 03:19:40.308726] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:16:57.235 [2024-10-09 03:19:40.309056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:57.235 [2024-10-09 03:19:40.315318] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:16:57.235 [2024-10-09 03:19:40.315346] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:16:57.235 [2024-10-09 03:19:40.315637] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:57.235 NewBaseBdev 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@901 -- # local i 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.235 [ 00:16:57.235 { 00:16:57.235 "name": "NewBaseBdev", 00:16:57.235 "aliases": [ 00:16:57.235 "31950ac8-6a42-4d61-9808-1a5cf7706355" 00:16:57.235 ], 00:16:57.235 "product_name": "Malloc disk", 00:16:57.235 "block_size": 512, 00:16:57.235 "num_blocks": 65536, 00:16:57.235 "uuid": "31950ac8-6a42-4d61-9808-1a5cf7706355", 00:16:57.235 "assigned_rate_limits": { 00:16:57.235 "rw_ios_per_sec": 0, 00:16:57.235 "rw_mbytes_per_sec": 0, 00:16:57.235 "r_mbytes_per_sec": 0, 00:16:57.235 "w_mbytes_per_sec": 0 00:16:57.235 }, 00:16:57.235 "claimed": true, 00:16:57.235 "claim_type": "exclusive_write", 00:16:57.235 "zoned": false, 00:16:57.235 "supported_io_types": { 00:16:57.235 "read": true, 00:16:57.235 "write": true, 00:16:57.235 "unmap": true, 00:16:57.235 "flush": true, 00:16:57.235 "reset": true, 00:16:57.235 "nvme_admin": false, 00:16:57.235 "nvme_io": false, 00:16:57.235 "nvme_io_md": false, 00:16:57.235 "write_zeroes": true, 00:16:57.235 "zcopy": true, 00:16:57.235 "get_zone_info": false, 00:16:57.235 "zone_management": false, 00:16:57.235 "zone_append": false, 00:16:57.235 "compare": false, 00:16:57.235 "compare_and_write": false, 00:16:57.235 "abort": true, 00:16:57.235 "seek_hole": false, 00:16:57.235 "seek_data": false, 00:16:57.235 "copy": true, 00:16:57.235 "nvme_iov_md": false 00:16:57.235 }, 00:16:57.235 "memory_domains": [ 00:16:57.235 { 00:16:57.235 "dma_device_id": "system", 00:16:57.235 "dma_device_type": 1 00:16:57.235 }, 00:16:57.235 { 00:16:57.235 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.235 "dma_device_type": 2 00:16:57.235 } 
00:16:57.235 ], 00:16:57.235 "driver_specific": {} 00:16:57.235 } 00:16:57.235 ] 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@907 -- # return 0 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.235 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:57.235 "name": "Existed_Raid", 00:16:57.235 "uuid": "8a7626e3-d08e-43a5-a021-fc3da971da16", 00:16:57.235 "strip_size_kb": 64, 00:16:57.235 "state": "online", 00:16:57.235 "raid_level": "raid5f", 00:16:57.235 "superblock": false, 00:16:57.235 "num_base_bdevs": 4, 00:16:57.235 "num_base_bdevs_discovered": 4, 00:16:57.235 "num_base_bdevs_operational": 4, 00:16:57.235 "base_bdevs_list": [ 00:16:57.235 { 00:16:57.235 "name": "NewBaseBdev", 00:16:57.235 "uuid": "31950ac8-6a42-4d61-9808-1a5cf7706355", 00:16:57.235 "is_configured": true, 00:16:57.235 "data_offset": 0, 00:16:57.235 "data_size": 65536 00:16:57.235 }, 00:16:57.235 { 00:16:57.235 "name": "BaseBdev2", 00:16:57.235 "uuid": "34e2bb82-005a-4240-873b-202065eb9812", 00:16:57.235 "is_configured": true, 00:16:57.235 "data_offset": 0, 00:16:57.235 "data_size": 65536 00:16:57.235 }, 00:16:57.235 { 00:16:57.235 "name": "BaseBdev3", 00:16:57.235 "uuid": "9cea93bf-408d-42b0-8f49-89397913d58f", 00:16:57.235 "is_configured": true, 00:16:57.235 "data_offset": 0, 00:16:57.235 "data_size": 65536 00:16:57.235 }, 00:16:57.236 { 00:16:57.236 "name": "BaseBdev4", 00:16:57.236 "uuid": "48132805-daba-4c17-a1ad-18bad9850518", 00:16:57.236 "is_configured": true, 00:16:57.236 "data_offset": 0, 00:16:57.236 "data_size": 65536 00:16:57.236 } 00:16:57.236 ] 00:16:57.236 }' 00:16:57.236 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.236 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.495 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:16:57.495 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:57.495 03:19:40 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:57.495 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:57.495 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:57.495 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:57.495 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:57.495 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:57.495 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.495 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.495 [2024-10-09 03:19:40.780314] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:57.755 "name": "Existed_Raid", 00:16:57.755 "aliases": [ 00:16:57.755 "8a7626e3-d08e-43a5-a021-fc3da971da16" 00:16:57.755 ], 00:16:57.755 "product_name": "Raid Volume", 00:16:57.755 "block_size": 512, 00:16:57.755 "num_blocks": 196608, 00:16:57.755 "uuid": "8a7626e3-d08e-43a5-a021-fc3da971da16", 00:16:57.755 "assigned_rate_limits": { 00:16:57.755 "rw_ios_per_sec": 0, 00:16:57.755 "rw_mbytes_per_sec": 0, 00:16:57.755 "r_mbytes_per_sec": 0, 00:16:57.755 "w_mbytes_per_sec": 0 00:16:57.755 }, 00:16:57.755 "claimed": false, 00:16:57.755 "zoned": false, 00:16:57.755 "supported_io_types": { 00:16:57.755 "read": true, 00:16:57.755 "write": true, 00:16:57.755 "unmap": false, 00:16:57.755 "flush": false, 00:16:57.755 "reset": true, 00:16:57.755 "nvme_admin": false, 00:16:57.755 "nvme_io": false, 00:16:57.755 "nvme_io_md": 
false, 00:16:57.755 "write_zeroes": true, 00:16:57.755 "zcopy": false, 00:16:57.755 "get_zone_info": false, 00:16:57.755 "zone_management": false, 00:16:57.755 "zone_append": false, 00:16:57.755 "compare": false, 00:16:57.755 "compare_and_write": false, 00:16:57.755 "abort": false, 00:16:57.755 "seek_hole": false, 00:16:57.755 "seek_data": false, 00:16:57.755 "copy": false, 00:16:57.755 "nvme_iov_md": false 00:16:57.755 }, 00:16:57.755 "driver_specific": { 00:16:57.755 "raid": { 00:16:57.755 "uuid": "8a7626e3-d08e-43a5-a021-fc3da971da16", 00:16:57.755 "strip_size_kb": 64, 00:16:57.755 "state": "online", 00:16:57.755 "raid_level": "raid5f", 00:16:57.755 "superblock": false, 00:16:57.755 "num_base_bdevs": 4, 00:16:57.755 "num_base_bdevs_discovered": 4, 00:16:57.755 "num_base_bdevs_operational": 4, 00:16:57.755 "base_bdevs_list": [ 00:16:57.755 { 00:16:57.755 "name": "NewBaseBdev", 00:16:57.755 "uuid": "31950ac8-6a42-4d61-9808-1a5cf7706355", 00:16:57.755 "is_configured": true, 00:16:57.755 "data_offset": 0, 00:16:57.755 "data_size": 65536 00:16:57.755 }, 00:16:57.755 { 00:16:57.755 "name": "BaseBdev2", 00:16:57.755 "uuid": "34e2bb82-005a-4240-873b-202065eb9812", 00:16:57.755 "is_configured": true, 00:16:57.755 "data_offset": 0, 00:16:57.755 "data_size": 65536 00:16:57.755 }, 00:16:57.755 { 00:16:57.755 "name": "BaseBdev3", 00:16:57.755 "uuid": "9cea93bf-408d-42b0-8f49-89397913d58f", 00:16:57.755 "is_configured": true, 00:16:57.755 "data_offset": 0, 00:16:57.755 "data_size": 65536 00:16:57.755 }, 00:16:57.755 { 00:16:57.755 "name": "BaseBdev4", 00:16:57.755 "uuid": "48132805-daba-4c17-a1ad-18bad9850518", 00:16:57.755 "is_configured": true, 00:16:57.755 "data_offset": 0, 00:16:57.755 "data_size": 65536 00:16:57.755 } 00:16:57.755 ] 00:16:57.755 } 00:16:57.755 } 00:16:57.755 }' 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:57.755 03:19:40 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:16:57.755 BaseBdev2 00:16:57.755 BaseBdev3 00:16:57.755 BaseBdev4' 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.755 03:19:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.755 03:19:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.755 03:19:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.755 03:19:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:57.756 03:19:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:16:57.756 03:19:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.756 03:19:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.756 03:19:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:57.756 03:19:41 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.756 03:19:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:57.756 03:19:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:57.756 03:19:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:57.756 03:19:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.756 03:19:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.023 [2024-10-09 03:19:41.059568] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:58.023 [2024-10-09 03:19:41.059610] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:58.023 [2024-10-09 03:19:41.059694] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:58.023 [2024-10-09 03:19:41.060014] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:58.023 [2024-10-09 03:19:41.060027] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:16:58.023 03:19:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.023 03:19:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83050 00:16:58.023 03:19:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@950 -- # '[' -z 83050 ']' 00:16:58.023 03:19:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # kill -0 83050 00:16:58.023 03:19:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # uname 00:16:58.023 03:19:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:58.023 03:19:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83050 00:16:58.023 03:19:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:58.023 03:19:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:58.023 03:19:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83050' 00:16:58.023 killing process with pid 83050 00:16:58.023 03:19:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@969 -- # kill 83050 00:16:58.023 [2024-10-09 03:19:41.098471] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:58.023 03:19:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@974 -- # wait 83050 00:16:58.299 [2024-10-09 03:19:41.529432] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:16:59.681 00:16:59.681 real 0m11.625s 00:16:59.681 user 0m18.044s 00:16:59.681 sys 0m2.220s 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:59.681 ************************************ 00:16:59.681 END TEST raid5f_state_function_test 00:16:59.681 ************************************ 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.681 03:19:42 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:16:59.681 03:19:42 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:16:59.681 03:19:42 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:59.681 03:19:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:59.681 ************************************ 00:16:59.681 START TEST 
raid5f_state_function_test_sb 00:16:59.681 ************************************ 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1125 -- # raid_state_function_test raid5f 4 true 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:16:59.681 
03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:59.681 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83722 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83722' 00:16:59.941 Process raid pid: 83722 00:16:59.941 03:19:42 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83722 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@831 -- # '[' -z 83722 ']' 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:59.941 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:59.941 03:19:42 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:59.941 [2024-10-09 03:19:43.072173] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:16:59.941 [2024-10-09 03:19:43.072379] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:59.941 [2024-10-09 03:19:43.239569] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:00.201 [2024-10-09 03:19:43.490804] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:00.461 [2024-10-09 03:19:43.740320] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.461 [2024-10-09 03:19:43.740358] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:00.723 03:19:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:00.723 03:19:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:00.723 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:00.723 03:19:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.723 03:19:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.723 [2024-10-09 03:19:43.910393] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.723 [2024-10-09 03:19:43.910530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.723 [2024-10-09 03:19:43.910566] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:00.723 [2024-10-09 03:19:43.910590] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:00.723 [2024-10-09 03:19:43.910608] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:17:00.723 [2024-10-09 03:19:43.910629] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:00.723 [2024-10-09 03:19:43.910647] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:00.723 [2024-10-09 03:19:43.910668] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:00.723 03:19:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.723 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:00.723 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.723 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.723 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.723 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.723 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:00.723 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.723 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.724 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.724 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.724 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.724 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:17:00.724 03:19:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.724 03:19:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:00.724 03:19:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.724 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.724 "name": "Existed_Raid", 00:17:00.724 "uuid": "1d55bc1d-aad3-4fd7-9aa9-940bab3de0ce", 00:17:00.724 "strip_size_kb": 64, 00:17:00.724 "state": "configuring", 00:17:00.724 "raid_level": "raid5f", 00:17:00.724 "superblock": true, 00:17:00.724 "num_base_bdevs": 4, 00:17:00.724 "num_base_bdevs_discovered": 0, 00:17:00.724 "num_base_bdevs_operational": 4, 00:17:00.724 "base_bdevs_list": [ 00:17:00.724 { 00:17:00.724 "name": "BaseBdev1", 00:17:00.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.724 "is_configured": false, 00:17:00.724 "data_offset": 0, 00:17:00.724 "data_size": 0 00:17:00.724 }, 00:17:00.724 { 00:17:00.724 "name": "BaseBdev2", 00:17:00.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.724 "is_configured": false, 00:17:00.724 "data_offset": 0, 00:17:00.724 "data_size": 0 00:17:00.724 }, 00:17:00.724 { 00:17:00.724 "name": "BaseBdev3", 00:17:00.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.724 "is_configured": false, 00:17:00.724 "data_offset": 0, 00:17:00.724 "data_size": 0 00:17:00.724 }, 00:17:00.724 { 00:17:00.724 "name": "BaseBdev4", 00:17:00.724 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.724 "is_configured": false, 00:17:00.724 "data_offset": 0, 00:17:00.724 "data_size": 0 00:17:00.724 } 00:17:00.724 ] 00:17:00.724 }' 00:17:00.724 03:19:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.724 03:19:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.293 [2024-10-09 03:19:44.373465] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:01.293 [2024-10-09 03:19:44.373576] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.293 [2024-10-09 03:19:44.385475] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:01.293 [2024-10-09 03:19:44.385516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:01.293 [2024-10-09 03:19:44.385525] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.293 [2024-10-09 03:19:44.385534] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.293 [2024-10-09 03:19:44.385539] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:01.293 [2024-10-09 03:19:44.385548] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:01.293 [2024-10-09 03:19:44.385554] 
bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:01.293 [2024-10-09 03:19:44.385563] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.293 [2024-10-09 03:19:44.459398] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.293 BaseBdev1 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.293 [ 00:17:01.293 { 00:17:01.293 "name": "BaseBdev1", 00:17:01.293 "aliases": [ 00:17:01.293 "cef82990-a771-4734-b7ff-bd4dcf2aa723" 00:17:01.293 ], 00:17:01.293 "product_name": "Malloc disk", 00:17:01.293 "block_size": 512, 00:17:01.293 "num_blocks": 65536, 00:17:01.293 "uuid": "cef82990-a771-4734-b7ff-bd4dcf2aa723", 00:17:01.293 "assigned_rate_limits": { 00:17:01.293 "rw_ios_per_sec": 0, 00:17:01.293 "rw_mbytes_per_sec": 0, 00:17:01.293 "r_mbytes_per_sec": 0, 00:17:01.293 "w_mbytes_per_sec": 0 00:17:01.293 }, 00:17:01.293 "claimed": true, 00:17:01.293 "claim_type": "exclusive_write", 00:17:01.293 "zoned": false, 00:17:01.293 "supported_io_types": { 00:17:01.293 "read": true, 00:17:01.293 "write": true, 00:17:01.293 "unmap": true, 00:17:01.293 "flush": true, 00:17:01.293 "reset": true, 00:17:01.293 "nvme_admin": false, 00:17:01.293 "nvme_io": false, 00:17:01.293 "nvme_io_md": false, 00:17:01.293 "write_zeroes": true, 00:17:01.293 "zcopy": true, 00:17:01.293 "get_zone_info": false, 00:17:01.293 "zone_management": false, 00:17:01.293 "zone_append": false, 00:17:01.293 "compare": false, 00:17:01.293 "compare_and_write": false, 00:17:01.293 "abort": true, 00:17:01.293 "seek_hole": false, 00:17:01.293 "seek_data": false, 00:17:01.293 "copy": true, 00:17:01.293 "nvme_iov_md": false 00:17:01.293 }, 00:17:01.293 "memory_domains": [ 00:17:01.293 { 00:17:01.293 "dma_device_id": "system", 00:17:01.293 "dma_device_type": 1 00:17:01.293 }, 00:17:01.293 { 00:17:01.293 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:17:01.293 "dma_device_type": 2 00:17:01.293 } 00:17:01.293 ], 00:17:01.293 "driver_specific": {} 00:17:01.293 } 00:17:01.293 ] 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.293 03:19:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.293 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.293 "name": "Existed_Raid", 00:17:01.293 "uuid": "70dabf77-9d49-41e2-8812-a58de5e9e1ed", 00:17:01.293 "strip_size_kb": 64, 00:17:01.293 "state": "configuring", 00:17:01.293 "raid_level": "raid5f", 00:17:01.293 "superblock": true, 00:17:01.293 "num_base_bdevs": 4, 00:17:01.293 "num_base_bdevs_discovered": 1, 00:17:01.293 "num_base_bdevs_operational": 4, 00:17:01.293 "base_bdevs_list": [ 00:17:01.293 { 00:17:01.293 "name": "BaseBdev1", 00:17:01.293 "uuid": "cef82990-a771-4734-b7ff-bd4dcf2aa723", 00:17:01.293 "is_configured": true, 00:17:01.293 "data_offset": 2048, 00:17:01.293 "data_size": 63488 00:17:01.293 }, 00:17:01.293 { 00:17:01.293 "name": "BaseBdev2", 00:17:01.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.293 "is_configured": false, 00:17:01.293 "data_offset": 0, 00:17:01.293 "data_size": 0 00:17:01.293 }, 00:17:01.293 { 00:17:01.293 "name": "BaseBdev3", 00:17:01.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.293 "is_configured": false, 00:17:01.293 "data_offset": 0, 00:17:01.293 "data_size": 0 00:17:01.293 }, 00:17:01.293 { 00:17:01.293 "name": "BaseBdev4", 00:17:01.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.294 "is_configured": false, 00:17:01.294 "data_offset": 0, 00:17:01.294 "data_size": 0 00:17:01.294 } 00:17:01.294 ] 00:17:01.294 }' 00:17:01.294 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.294 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:01.863 03:19:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.863 [2024-10-09 03:19:44.918618] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:01.863 [2024-10-09 03:19:44.918665] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.863 [2024-10-09 03:19:44.930662] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.863 [2024-10-09 03:19:44.932648] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:01.863 [2024-10-09 03:19:44.932785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:01.863 [2024-10-09 03:19:44.932799] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:01.863 [2024-10-09 03:19:44.932810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:01.863 [2024-10-09 03:19:44.932816] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:17:01.863 [2024-10-09 03:19:44.932824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:01.863 03:19:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.863 "name": "Existed_Raid", 00:17:01.863 "uuid": "eceb6bed-aec1-4c07-8bde-9acda1c082e9", 00:17:01.863 "strip_size_kb": 64, 00:17:01.863 "state": "configuring", 00:17:01.863 "raid_level": "raid5f", 00:17:01.863 "superblock": true, 00:17:01.863 "num_base_bdevs": 4, 00:17:01.863 "num_base_bdevs_discovered": 1, 00:17:01.863 "num_base_bdevs_operational": 4, 00:17:01.863 "base_bdevs_list": [ 00:17:01.863 { 00:17:01.863 "name": "BaseBdev1", 00:17:01.863 "uuid": "cef82990-a771-4734-b7ff-bd4dcf2aa723", 00:17:01.863 "is_configured": true, 00:17:01.863 "data_offset": 2048, 00:17:01.863 "data_size": 63488 00:17:01.863 }, 00:17:01.863 { 00:17:01.863 "name": "BaseBdev2", 00:17:01.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.863 "is_configured": false, 00:17:01.863 "data_offset": 0, 00:17:01.863 "data_size": 0 00:17:01.863 }, 00:17:01.863 { 00:17:01.863 "name": "BaseBdev3", 00:17:01.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.863 "is_configured": false, 00:17:01.863 "data_offset": 0, 00:17:01.863 "data_size": 0 00:17:01.863 }, 00:17:01.863 { 00:17:01.863 "name": "BaseBdev4", 00:17:01.863 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.863 "is_configured": false, 00:17:01.863 "data_offset": 0, 00:17:01.863 "data_size": 0 00:17:01.863 } 00:17:01.863 ] 00:17:01.863 }' 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.863 03:19:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 
00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.123 [2024-10-09 03:19:45.400591] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:02.123 BaseBdev2 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.123 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.383 [ 00:17:02.383 { 00:17:02.383 "name": "BaseBdev2", 00:17:02.383 "aliases": [ 00:17:02.383 
"c0d8a612-a0ce-4a25-901b-865800845dbb" 00:17:02.383 ], 00:17:02.383 "product_name": "Malloc disk", 00:17:02.383 "block_size": 512, 00:17:02.383 "num_blocks": 65536, 00:17:02.383 "uuid": "c0d8a612-a0ce-4a25-901b-865800845dbb", 00:17:02.383 "assigned_rate_limits": { 00:17:02.383 "rw_ios_per_sec": 0, 00:17:02.383 "rw_mbytes_per_sec": 0, 00:17:02.383 "r_mbytes_per_sec": 0, 00:17:02.383 "w_mbytes_per_sec": 0 00:17:02.383 }, 00:17:02.383 "claimed": true, 00:17:02.383 "claim_type": "exclusive_write", 00:17:02.383 "zoned": false, 00:17:02.383 "supported_io_types": { 00:17:02.383 "read": true, 00:17:02.383 "write": true, 00:17:02.383 "unmap": true, 00:17:02.383 "flush": true, 00:17:02.383 "reset": true, 00:17:02.383 "nvme_admin": false, 00:17:02.383 "nvme_io": false, 00:17:02.383 "nvme_io_md": false, 00:17:02.383 "write_zeroes": true, 00:17:02.383 "zcopy": true, 00:17:02.383 "get_zone_info": false, 00:17:02.383 "zone_management": false, 00:17:02.383 "zone_append": false, 00:17:02.383 "compare": false, 00:17:02.383 "compare_and_write": false, 00:17:02.383 "abort": true, 00:17:02.383 "seek_hole": false, 00:17:02.383 "seek_data": false, 00:17:02.383 "copy": true, 00:17:02.383 "nvme_iov_md": false 00:17:02.383 }, 00:17:02.383 "memory_domains": [ 00:17:02.383 { 00:17:02.383 "dma_device_id": "system", 00:17:02.383 "dma_device_type": 1 00:17:02.383 }, 00:17:02.383 { 00:17:02.383 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.383 "dma_device_type": 2 00:17:02.383 } 00:17:02.383 ], 00:17:02.383 "driver_specific": {} 00:17:02.383 } 00:17:02.383 ] 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.383 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.383 "name": "Existed_Raid", 00:17:02.383 "uuid": 
"eceb6bed-aec1-4c07-8bde-9acda1c082e9", 00:17:02.383 "strip_size_kb": 64, 00:17:02.383 "state": "configuring", 00:17:02.383 "raid_level": "raid5f", 00:17:02.383 "superblock": true, 00:17:02.383 "num_base_bdevs": 4, 00:17:02.383 "num_base_bdevs_discovered": 2, 00:17:02.383 "num_base_bdevs_operational": 4, 00:17:02.383 "base_bdevs_list": [ 00:17:02.383 { 00:17:02.383 "name": "BaseBdev1", 00:17:02.383 "uuid": "cef82990-a771-4734-b7ff-bd4dcf2aa723", 00:17:02.383 "is_configured": true, 00:17:02.383 "data_offset": 2048, 00:17:02.383 "data_size": 63488 00:17:02.383 }, 00:17:02.383 { 00:17:02.383 "name": "BaseBdev2", 00:17:02.383 "uuid": "c0d8a612-a0ce-4a25-901b-865800845dbb", 00:17:02.383 "is_configured": true, 00:17:02.383 "data_offset": 2048, 00:17:02.383 "data_size": 63488 00:17:02.383 }, 00:17:02.383 { 00:17:02.383 "name": "BaseBdev3", 00:17:02.383 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.383 "is_configured": false, 00:17:02.383 "data_offset": 0, 00:17:02.383 "data_size": 0 00:17:02.383 }, 00:17:02.384 { 00:17:02.384 "name": "BaseBdev4", 00:17:02.384 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.384 "is_configured": false, 00:17:02.384 "data_offset": 0, 00:17:02.384 "data_size": 0 00:17:02.384 } 00:17:02.384 ] 00:17:02.384 }' 00:17:02.384 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.384 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.643 [2024-10-09 03:19:45.927894] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:02.643 BaseBdev3 
00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.643 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.903 [ 00:17:02.903 { 00:17:02.903 "name": "BaseBdev3", 00:17:02.903 "aliases": [ 00:17:02.903 "9f612e65-7a88-45cc-8545-e30476de1dbd" 00:17:02.903 ], 00:17:02.903 "product_name": "Malloc disk", 00:17:02.903 "block_size": 512, 00:17:02.903 "num_blocks": 65536, 00:17:02.903 "uuid": "9f612e65-7a88-45cc-8545-e30476de1dbd", 00:17:02.903 
"assigned_rate_limits": { 00:17:02.903 "rw_ios_per_sec": 0, 00:17:02.903 "rw_mbytes_per_sec": 0, 00:17:02.903 "r_mbytes_per_sec": 0, 00:17:02.903 "w_mbytes_per_sec": 0 00:17:02.903 }, 00:17:02.903 "claimed": true, 00:17:02.903 "claim_type": "exclusive_write", 00:17:02.903 "zoned": false, 00:17:02.903 "supported_io_types": { 00:17:02.903 "read": true, 00:17:02.903 "write": true, 00:17:02.903 "unmap": true, 00:17:02.903 "flush": true, 00:17:02.903 "reset": true, 00:17:02.903 "nvme_admin": false, 00:17:02.903 "nvme_io": false, 00:17:02.903 "nvme_io_md": false, 00:17:02.903 "write_zeroes": true, 00:17:02.903 "zcopy": true, 00:17:02.903 "get_zone_info": false, 00:17:02.903 "zone_management": false, 00:17:02.903 "zone_append": false, 00:17:02.903 "compare": false, 00:17:02.903 "compare_and_write": false, 00:17:02.903 "abort": true, 00:17:02.903 "seek_hole": false, 00:17:02.903 "seek_data": false, 00:17:02.903 "copy": true, 00:17:02.903 "nvme_iov_md": false 00:17:02.903 }, 00:17:02.903 "memory_domains": [ 00:17:02.903 { 00:17:02.903 "dma_device_id": "system", 00:17:02.903 "dma_device_type": 1 00:17:02.903 }, 00:17:02.903 { 00:17:02.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:02.903 "dma_device_type": 2 00:17:02.903 } 00:17:02.903 ], 00:17:02.903 "driver_specific": {} 00:17:02.903 } 00:17:02.903 ] 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:02.903 03:19:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.903 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.903 "name": "Existed_Raid", 00:17:02.903 "uuid": "eceb6bed-aec1-4c07-8bde-9acda1c082e9", 00:17:02.903 "strip_size_kb": 64, 00:17:02.903 "state": "configuring", 00:17:02.903 "raid_level": "raid5f", 00:17:02.903 "superblock": true, 00:17:02.903 "num_base_bdevs": 4, 00:17:02.903 "num_base_bdevs_discovered": 3, 
00:17:02.903 "num_base_bdevs_operational": 4, 00:17:02.903 "base_bdevs_list": [ 00:17:02.903 { 00:17:02.903 "name": "BaseBdev1", 00:17:02.903 "uuid": "cef82990-a771-4734-b7ff-bd4dcf2aa723", 00:17:02.903 "is_configured": true, 00:17:02.903 "data_offset": 2048, 00:17:02.903 "data_size": 63488 00:17:02.903 }, 00:17:02.903 { 00:17:02.903 "name": "BaseBdev2", 00:17:02.903 "uuid": "c0d8a612-a0ce-4a25-901b-865800845dbb", 00:17:02.903 "is_configured": true, 00:17:02.903 "data_offset": 2048, 00:17:02.903 "data_size": 63488 00:17:02.903 }, 00:17:02.903 { 00:17:02.903 "name": "BaseBdev3", 00:17:02.903 "uuid": "9f612e65-7a88-45cc-8545-e30476de1dbd", 00:17:02.903 "is_configured": true, 00:17:02.903 "data_offset": 2048, 00:17:02.903 "data_size": 63488 00:17:02.903 }, 00:17:02.903 { 00:17:02.903 "name": "BaseBdev4", 00:17:02.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.903 "is_configured": false, 00:17:02.903 "data_offset": 0, 00:17:02.903 "data_size": 0 00:17:02.903 } 00:17:02.903 ] 00:17:02.903 }' 00:17:02.903 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.903 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.163 [2024-10-09 03:19:46.427073] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:03.163 [2024-10-09 03:19:46.427380] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:03.163 [2024-10-09 03:19:46.427396] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:03.163 [2024-10-09 
03:19:46.427674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:03.163 BaseBdev4 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.163 [2024-10-09 03:19:46.434594] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:03.163 [2024-10-09 03:19:46.434698] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:03.163 [2024-10-09 03:19:46.435017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:03.163 03:19:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.163 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.163 [ 00:17:03.163 { 00:17:03.163 "name": "BaseBdev4", 00:17:03.163 "aliases": [ 00:17:03.163 "0b31dc0d-a925-4c8c-a75a-53a1760fa835" 00:17:03.163 ], 00:17:03.163 "product_name": "Malloc disk", 00:17:03.163 "block_size": 512, 00:17:03.163 "num_blocks": 65536, 00:17:03.163 "uuid": "0b31dc0d-a925-4c8c-a75a-53a1760fa835", 00:17:03.163 "assigned_rate_limits": { 00:17:03.163 "rw_ios_per_sec": 0, 00:17:03.163 "rw_mbytes_per_sec": 0, 00:17:03.164 "r_mbytes_per_sec": 0, 00:17:03.164 "w_mbytes_per_sec": 0 00:17:03.164 }, 00:17:03.164 "claimed": true, 00:17:03.164 "claim_type": "exclusive_write", 00:17:03.164 "zoned": false, 00:17:03.164 "supported_io_types": { 00:17:03.164 "read": true, 00:17:03.164 "write": true, 00:17:03.164 "unmap": true, 00:17:03.164 "flush": true, 00:17:03.164 "reset": true, 00:17:03.164 "nvme_admin": false, 00:17:03.164 "nvme_io": false, 00:17:03.164 "nvme_io_md": false, 00:17:03.164 "write_zeroes": true, 00:17:03.164 "zcopy": true, 00:17:03.164 "get_zone_info": false, 00:17:03.164 "zone_management": false, 00:17:03.164 "zone_append": false, 00:17:03.164 "compare": false, 00:17:03.164 "compare_and_write": false, 00:17:03.423 "abort": true, 00:17:03.423 "seek_hole": false, 00:17:03.423 "seek_data": false, 00:17:03.423 "copy": true, 00:17:03.423 "nvme_iov_md": false 00:17:03.423 }, 00:17:03.423 "memory_domains": [ 00:17:03.424 { 00:17:03.424 "dma_device_id": "system", 00:17:03.424 "dma_device_type": 1 00:17:03.424 }, 00:17:03.424 { 00:17:03.424 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:03.424 "dma_device_type": 2 00:17:03.424 } 00:17:03.424 ], 00:17:03.424 "driver_specific": {} 00:17:03.424 } 00:17:03.424 ] 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.424 03:19:46 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.424 "name": "Existed_Raid", 00:17:03.424 "uuid": "eceb6bed-aec1-4c07-8bde-9acda1c082e9", 00:17:03.424 "strip_size_kb": 64, 00:17:03.424 "state": "online", 00:17:03.424 "raid_level": "raid5f", 00:17:03.424 "superblock": true, 00:17:03.424 "num_base_bdevs": 4, 00:17:03.424 "num_base_bdevs_discovered": 4, 00:17:03.424 "num_base_bdevs_operational": 4, 00:17:03.424 "base_bdevs_list": [ 00:17:03.424 { 00:17:03.424 "name": "BaseBdev1", 00:17:03.424 "uuid": "cef82990-a771-4734-b7ff-bd4dcf2aa723", 00:17:03.424 "is_configured": true, 00:17:03.424 "data_offset": 2048, 00:17:03.424 "data_size": 63488 00:17:03.424 }, 00:17:03.424 { 00:17:03.424 "name": "BaseBdev2", 00:17:03.424 "uuid": "c0d8a612-a0ce-4a25-901b-865800845dbb", 00:17:03.424 "is_configured": true, 00:17:03.424 "data_offset": 2048, 00:17:03.424 "data_size": 63488 00:17:03.424 }, 00:17:03.424 { 00:17:03.424 "name": "BaseBdev3", 00:17:03.424 "uuid": "9f612e65-7a88-45cc-8545-e30476de1dbd", 00:17:03.424 "is_configured": true, 00:17:03.424 "data_offset": 2048, 00:17:03.424 "data_size": 63488 00:17:03.424 }, 00:17:03.424 { 00:17:03.424 "name": "BaseBdev4", 00:17:03.424 "uuid": "0b31dc0d-a925-4c8c-a75a-53a1760fa835", 00:17:03.424 "is_configured": true, 00:17:03.424 "data_offset": 2048, 00:17:03.424 "data_size": 63488 00:17:03.424 } 00:17:03.424 ] 00:17:03.424 }' 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.424 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.684 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:03.684 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:17:03.684 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:03.684 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:03.684 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:03.684 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:03.684 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:03.684 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:03.684 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.684 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.684 [2024-10-09 03:19:46.931084] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:03.684 03:19:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.684 03:19:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:03.684 "name": "Existed_Raid", 00:17:03.684 "aliases": [ 00:17:03.684 "eceb6bed-aec1-4c07-8bde-9acda1c082e9" 00:17:03.684 ], 00:17:03.684 "product_name": "Raid Volume", 00:17:03.684 "block_size": 512, 00:17:03.684 "num_blocks": 190464, 00:17:03.684 "uuid": "eceb6bed-aec1-4c07-8bde-9acda1c082e9", 00:17:03.684 "assigned_rate_limits": { 00:17:03.684 "rw_ios_per_sec": 0, 00:17:03.684 "rw_mbytes_per_sec": 0, 00:17:03.684 "r_mbytes_per_sec": 0, 00:17:03.684 "w_mbytes_per_sec": 0 00:17:03.684 }, 00:17:03.684 "claimed": false, 00:17:03.684 "zoned": false, 00:17:03.684 "supported_io_types": { 00:17:03.684 "read": true, 00:17:03.684 "write": true, 00:17:03.684 "unmap": false, 00:17:03.684 "flush": false, 
00:17:03.684 "reset": true, 00:17:03.684 "nvme_admin": false, 00:17:03.684 "nvme_io": false, 00:17:03.684 "nvme_io_md": false, 00:17:03.684 "write_zeroes": true, 00:17:03.684 "zcopy": false, 00:17:03.684 "get_zone_info": false, 00:17:03.684 "zone_management": false, 00:17:03.684 "zone_append": false, 00:17:03.684 "compare": false, 00:17:03.684 "compare_and_write": false, 00:17:03.684 "abort": false, 00:17:03.684 "seek_hole": false, 00:17:03.684 "seek_data": false, 00:17:03.684 "copy": false, 00:17:03.684 "nvme_iov_md": false 00:17:03.684 }, 00:17:03.684 "driver_specific": { 00:17:03.684 "raid": { 00:17:03.684 "uuid": "eceb6bed-aec1-4c07-8bde-9acda1c082e9", 00:17:03.684 "strip_size_kb": 64, 00:17:03.684 "state": "online", 00:17:03.684 "raid_level": "raid5f", 00:17:03.684 "superblock": true, 00:17:03.684 "num_base_bdevs": 4, 00:17:03.684 "num_base_bdevs_discovered": 4, 00:17:03.684 "num_base_bdevs_operational": 4, 00:17:03.684 "base_bdevs_list": [ 00:17:03.684 { 00:17:03.684 "name": "BaseBdev1", 00:17:03.684 "uuid": "cef82990-a771-4734-b7ff-bd4dcf2aa723", 00:17:03.684 "is_configured": true, 00:17:03.684 "data_offset": 2048, 00:17:03.684 "data_size": 63488 00:17:03.684 }, 00:17:03.684 { 00:17:03.684 "name": "BaseBdev2", 00:17:03.684 "uuid": "c0d8a612-a0ce-4a25-901b-865800845dbb", 00:17:03.684 "is_configured": true, 00:17:03.684 "data_offset": 2048, 00:17:03.684 "data_size": 63488 00:17:03.684 }, 00:17:03.684 { 00:17:03.684 "name": "BaseBdev3", 00:17:03.684 "uuid": "9f612e65-7a88-45cc-8545-e30476de1dbd", 00:17:03.684 "is_configured": true, 00:17:03.684 "data_offset": 2048, 00:17:03.684 "data_size": 63488 00:17:03.684 }, 00:17:03.684 { 00:17:03.684 "name": "BaseBdev4", 00:17:03.684 "uuid": "0b31dc0d-a925-4c8c-a75a-53a1760fa835", 00:17:03.684 "is_configured": true, 00:17:03.684 "data_offset": 2048, 00:17:03.684 "data_size": 63488 00:17:03.684 } 00:17:03.684 ] 00:17:03.684 } 00:17:03.684 } 00:17:03.684 }' 00:17:03.684 03:19:46 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:03.944 BaseBdev2 00:17:03.944 BaseBdev3 00:17:03.944 BaseBdev4' 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.944 03:19:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:03.944 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:03.945 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:03.945 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:03.945 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:03.945 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:03.945 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.204 [2024-10-09 03:19:47.282313] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.204 "name": "Existed_Raid", 00:17:04.204 "uuid": "eceb6bed-aec1-4c07-8bde-9acda1c082e9", 00:17:04.204 "strip_size_kb": 64, 00:17:04.204 "state": "online", 00:17:04.204 "raid_level": "raid5f", 00:17:04.204 "superblock": true, 00:17:04.204 "num_base_bdevs": 4, 00:17:04.204 "num_base_bdevs_discovered": 3, 00:17:04.204 "num_base_bdevs_operational": 3, 00:17:04.204 "base_bdevs_list": [ 00:17:04.204 { 00:17:04.204 "name": null, 00:17:04.204 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:04.204 "is_configured": false, 00:17:04.204 "data_offset": 0, 00:17:04.204 "data_size": 63488 00:17:04.204 }, 00:17:04.204 { 00:17:04.204 "name": "BaseBdev2", 00:17:04.204 "uuid": "c0d8a612-a0ce-4a25-901b-865800845dbb", 00:17:04.204 "is_configured": true, 00:17:04.204 "data_offset": 2048, 00:17:04.204 "data_size": 63488 00:17:04.204 }, 00:17:04.204 { 00:17:04.204 "name": "BaseBdev3", 00:17:04.204 "uuid": "9f612e65-7a88-45cc-8545-e30476de1dbd", 00:17:04.204 "is_configured": true, 00:17:04.204 "data_offset": 2048, 00:17:04.204 "data_size": 63488 00:17:04.204 }, 00:17:04.204 { 00:17:04.204 "name": "BaseBdev4", 00:17:04.204 "uuid": "0b31dc0d-a925-4c8c-a75a-53a1760fa835", 00:17:04.204 "is_configured": true, 00:17:04.204 "data_offset": 2048, 00:17:04.204 "data_size": 63488 00:17:04.204 } 00:17:04.204 ] 00:17:04.204 }' 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.204 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.774 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:04.774 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.774 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.774 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.774 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.774 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:04.774 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.774 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 
00:17:04.774 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.774 03:19:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:04.774 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.774 03:19:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.774 [2024-10-09 03:19:47.910684] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:04.774 [2024-10-09 03:19:47.910990] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:04.774 [2024-10-09 03:19:48.010504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:04.774 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.774 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:04.774 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:04.774 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.774 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:04.774 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.774 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.774 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.774 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:04.774 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:04.774 
03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:04.774 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.774 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:04.774 [2024-10-09 03:19:48.070406] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.034 [2024-10-09 03:19:48.225141] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:17:05.034 [2024-10-09 03:19:48.225277] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.034 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:05.294 BaseBdev2 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.294 [ 00:17:05.294 { 00:17:05.294 "name": "BaseBdev2", 00:17:05.294 "aliases": [ 00:17:05.294 "6d60bb43-9432-4fd4-bad2-9b307dac7704" 00:17:05.294 ], 00:17:05.294 "product_name": "Malloc disk", 00:17:05.294 "block_size": 512, 00:17:05.294 "num_blocks": 65536, 00:17:05.294 "uuid": 
"6d60bb43-9432-4fd4-bad2-9b307dac7704", 00:17:05.294 "assigned_rate_limits": { 00:17:05.294 "rw_ios_per_sec": 0, 00:17:05.294 "rw_mbytes_per_sec": 0, 00:17:05.294 "r_mbytes_per_sec": 0, 00:17:05.294 "w_mbytes_per_sec": 0 00:17:05.294 }, 00:17:05.294 "claimed": false, 00:17:05.294 "zoned": false, 00:17:05.294 "supported_io_types": { 00:17:05.294 "read": true, 00:17:05.294 "write": true, 00:17:05.294 "unmap": true, 00:17:05.294 "flush": true, 00:17:05.294 "reset": true, 00:17:05.294 "nvme_admin": false, 00:17:05.294 "nvme_io": false, 00:17:05.294 "nvme_io_md": false, 00:17:05.294 "write_zeroes": true, 00:17:05.294 "zcopy": true, 00:17:05.294 "get_zone_info": false, 00:17:05.294 "zone_management": false, 00:17:05.294 "zone_append": false, 00:17:05.294 "compare": false, 00:17:05.294 "compare_and_write": false, 00:17:05.294 "abort": true, 00:17:05.294 "seek_hole": false, 00:17:05.294 "seek_data": false, 00:17:05.294 "copy": true, 00:17:05.294 "nvme_iov_md": false 00:17:05.294 }, 00:17:05.294 "memory_domains": [ 00:17:05.294 { 00:17:05.294 "dma_device_id": "system", 00:17:05.294 "dma_device_type": 1 00:17:05.294 }, 00:17:05.294 { 00:17:05.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.294 "dma_device_type": 2 00:17:05.294 } 00:17:05.294 ], 00:17:05.294 "driver_specific": {} 00:17:05.294 } 00:17:05.294 ] 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.294 BaseBdev3 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev3 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:05.294 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.295 [ 00:17:05.295 { 00:17:05.295 "name": "BaseBdev3", 00:17:05.295 "aliases": [ 00:17:05.295 "5989e3a1-0904-4a86-a73f-e47498923c60" 00:17:05.295 ], 00:17:05.295 
"product_name": "Malloc disk", 00:17:05.295 "block_size": 512, 00:17:05.295 "num_blocks": 65536, 00:17:05.295 "uuid": "5989e3a1-0904-4a86-a73f-e47498923c60", 00:17:05.295 "assigned_rate_limits": { 00:17:05.295 "rw_ios_per_sec": 0, 00:17:05.295 "rw_mbytes_per_sec": 0, 00:17:05.295 "r_mbytes_per_sec": 0, 00:17:05.295 "w_mbytes_per_sec": 0 00:17:05.295 }, 00:17:05.295 "claimed": false, 00:17:05.295 "zoned": false, 00:17:05.295 "supported_io_types": { 00:17:05.295 "read": true, 00:17:05.295 "write": true, 00:17:05.295 "unmap": true, 00:17:05.295 "flush": true, 00:17:05.295 "reset": true, 00:17:05.295 "nvme_admin": false, 00:17:05.295 "nvme_io": false, 00:17:05.295 "nvme_io_md": false, 00:17:05.295 "write_zeroes": true, 00:17:05.295 "zcopy": true, 00:17:05.295 "get_zone_info": false, 00:17:05.295 "zone_management": false, 00:17:05.295 "zone_append": false, 00:17:05.295 "compare": false, 00:17:05.295 "compare_and_write": false, 00:17:05.295 "abort": true, 00:17:05.295 "seek_hole": false, 00:17:05.295 "seek_data": false, 00:17:05.295 "copy": true, 00:17:05.295 "nvme_iov_md": false 00:17:05.295 }, 00:17:05.295 "memory_domains": [ 00:17:05.295 { 00:17:05.295 "dma_device_id": "system", 00:17:05.295 "dma_device_type": 1 00:17:05.295 }, 00:17:05.295 { 00:17:05.295 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.295 "dma_device_type": 2 00:17:05.295 } 00:17:05.295 ], 00:17:05.295 "driver_specific": {} 00:17:05.295 } 00:17:05.295 ] 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd 
bdev_malloc_create 32 512 -b BaseBdev4 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.295 BaseBdev4 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev4 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.295 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.554 [ 00:17:05.554 { 00:17:05.554 "name": "BaseBdev4", 00:17:05.554 
"aliases": [ 00:17:05.554 "01df8dd1-85f2-4dd7-8520-a63686e8a56f" 00:17:05.554 ], 00:17:05.554 "product_name": "Malloc disk", 00:17:05.554 "block_size": 512, 00:17:05.554 "num_blocks": 65536, 00:17:05.554 "uuid": "01df8dd1-85f2-4dd7-8520-a63686e8a56f", 00:17:05.554 "assigned_rate_limits": { 00:17:05.554 "rw_ios_per_sec": 0, 00:17:05.554 "rw_mbytes_per_sec": 0, 00:17:05.554 "r_mbytes_per_sec": 0, 00:17:05.554 "w_mbytes_per_sec": 0 00:17:05.554 }, 00:17:05.554 "claimed": false, 00:17:05.554 "zoned": false, 00:17:05.554 "supported_io_types": { 00:17:05.554 "read": true, 00:17:05.554 "write": true, 00:17:05.554 "unmap": true, 00:17:05.554 "flush": true, 00:17:05.554 "reset": true, 00:17:05.554 "nvme_admin": false, 00:17:05.554 "nvme_io": false, 00:17:05.554 "nvme_io_md": false, 00:17:05.554 "write_zeroes": true, 00:17:05.554 "zcopy": true, 00:17:05.554 "get_zone_info": false, 00:17:05.554 "zone_management": false, 00:17:05.554 "zone_append": false, 00:17:05.554 "compare": false, 00:17:05.554 "compare_and_write": false, 00:17:05.554 "abort": true, 00:17:05.554 "seek_hole": false, 00:17:05.554 "seek_data": false, 00:17:05.554 "copy": true, 00:17:05.554 "nvme_iov_md": false 00:17:05.554 }, 00:17:05.554 "memory_domains": [ 00:17:05.554 { 00:17:05.554 "dma_device_id": "system", 00:17:05.554 "dma_device_type": 1 00:17:05.554 }, 00:17:05.554 { 00:17:05.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.554 "dma_device_type": 2 00:17:05.554 } 00:17:05.554 ], 00:17:05.554 "driver_specific": {} 00:17:05.554 } 00:17:05.554 ] 00:17:05.554 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.554 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:05.554 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:05.554 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:05.554 
03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:17:05.554 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.554 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.554 [2024-10-09 03:19:48.625702] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:05.554 [2024-10-09 03:19:48.625761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:05.554 [2024-10-09 03:19:48.625782] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:05.554 [2024-10-09 03:19:48.627776] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:05.554 [2024-10-09 03:19:48.627830] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:05.554 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.555 "name": "Existed_Raid", 00:17:05.555 "uuid": "2b5b7460-92fa-4614-9f29-90231ca1ecde", 00:17:05.555 "strip_size_kb": 64, 00:17:05.555 "state": "configuring", 00:17:05.555 "raid_level": "raid5f", 00:17:05.555 "superblock": true, 00:17:05.555 "num_base_bdevs": 4, 00:17:05.555 "num_base_bdevs_discovered": 3, 00:17:05.555 "num_base_bdevs_operational": 4, 00:17:05.555 "base_bdevs_list": [ 00:17:05.555 { 00:17:05.555 "name": "BaseBdev1", 00:17:05.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.555 "is_configured": false, 00:17:05.555 "data_offset": 0, 00:17:05.555 "data_size": 0 00:17:05.555 }, 00:17:05.555 { 00:17:05.555 "name": "BaseBdev2", 00:17:05.555 "uuid": "6d60bb43-9432-4fd4-bad2-9b307dac7704", 00:17:05.555 "is_configured": true, 00:17:05.555 "data_offset": 2048, 00:17:05.555 "data_size": 63488 00:17:05.555 }, 00:17:05.555 { 00:17:05.555 "name": "BaseBdev3", 
00:17:05.555 "uuid": "5989e3a1-0904-4a86-a73f-e47498923c60", 00:17:05.555 "is_configured": true, 00:17:05.555 "data_offset": 2048, 00:17:05.555 "data_size": 63488 00:17:05.555 }, 00:17:05.555 { 00:17:05.555 "name": "BaseBdev4", 00:17:05.555 "uuid": "01df8dd1-85f2-4dd7-8520-a63686e8a56f", 00:17:05.555 "is_configured": true, 00:17:05.555 "data_offset": 2048, 00:17:05.555 "data_size": 63488 00:17:05.555 } 00:17:05.555 ] 00:17:05.555 }' 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.555 03:19:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.815 [2024-10-09 03:19:49.052942] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:05.815 
03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:05.815 "name": "Existed_Raid", 00:17:05.815 "uuid": "2b5b7460-92fa-4614-9f29-90231ca1ecde", 00:17:05.815 "strip_size_kb": 64, 00:17:05.815 "state": "configuring", 00:17:05.815 "raid_level": "raid5f", 00:17:05.815 "superblock": true, 00:17:05.815 "num_base_bdevs": 4, 00:17:05.815 "num_base_bdevs_discovered": 2, 00:17:05.815 "num_base_bdevs_operational": 4, 00:17:05.815 "base_bdevs_list": [ 00:17:05.815 { 00:17:05.815 "name": "BaseBdev1", 00:17:05.815 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:05.815 "is_configured": false, 00:17:05.815 "data_offset": 0, 00:17:05.815 "data_size": 0 00:17:05.815 }, 00:17:05.815 { 00:17:05.815 "name": null, 00:17:05.815 "uuid": "6d60bb43-9432-4fd4-bad2-9b307dac7704", 00:17:05.815 "is_configured": false, 00:17:05.815 "data_offset": 0, 00:17:05.815 "data_size": 63488 00:17:05.815 }, 00:17:05.815 { 
00:17:05.815 "name": "BaseBdev3", 00:17:05.815 "uuid": "5989e3a1-0904-4a86-a73f-e47498923c60", 00:17:05.815 "is_configured": true, 00:17:05.815 "data_offset": 2048, 00:17:05.815 "data_size": 63488 00:17:05.815 }, 00:17:05.815 { 00:17:05.815 "name": "BaseBdev4", 00:17:05.815 "uuid": "01df8dd1-85f2-4dd7-8520-a63686e8a56f", 00:17:05.815 "is_configured": true, 00:17:05.815 "data_offset": 2048, 00:17:05.815 "data_size": 63488 00:17:05.815 } 00:17:05.815 ] 00:17:05.815 }' 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:05.815 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.384 [2024-10-09 03:19:49.545498] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:06.384 BaseBdev1 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.384 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.384 [ 00:17:06.384 { 00:17:06.384 "name": "BaseBdev1", 00:17:06.384 "aliases": [ 00:17:06.384 "5ff5391b-3d22-456b-93cb-fbcf7b5c1703" 00:17:06.384 ], 00:17:06.384 "product_name": "Malloc disk", 00:17:06.384 "block_size": 512, 00:17:06.384 "num_blocks": 65536, 00:17:06.384 "uuid": "5ff5391b-3d22-456b-93cb-fbcf7b5c1703", 00:17:06.384 "assigned_rate_limits": { 00:17:06.384 "rw_ios_per_sec": 0, 00:17:06.384 "rw_mbytes_per_sec": 0, 00:17:06.384 
"r_mbytes_per_sec": 0, 00:17:06.384 "w_mbytes_per_sec": 0 00:17:06.384 }, 00:17:06.384 "claimed": true, 00:17:06.384 "claim_type": "exclusive_write", 00:17:06.384 "zoned": false, 00:17:06.384 "supported_io_types": { 00:17:06.384 "read": true, 00:17:06.384 "write": true, 00:17:06.384 "unmap": true, 00:17:06.384 "flush": true, 00:17:06.384 "reset": true, 00:17:06.384 "nvme_admin": false, 00:17:06.384 "nvme_io": false, 00:17:06.384 "nvme_io_md": false, 00:17:06.384 "write_zeroes": true, 00:17:06.384 "zcopy": true, 00:17:06.384 "get_zone_info": false, 00:17:06.384 "zone_management": false, 00:17:06.384 "zone_append": false, 00:17:06.384 "compare": false, 00:17:06.384 "compare_and_write": false, 00:17:06.384 "abort": true, 00:17:06.384 "seek_hole": false, 00:17:06.384 "seek_data": false, 00:17:06.384 "copy": true, 00:17:06.384 "nvme_iov_md": false 00:17:06.384 }, 00:17:06.384 "memory_domains": [ 00:17:06.384 { 00:17:06.384 "dma_device_id": "system", 00:17:06.384 "dma_device_type": 1 00:17:06.384 }, 00:17:06.384 { 00:17:06.385 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:06.385 "dma_device_type": 2 00:17:06.385 } 00:17:06.385 ], 00:17:06.385 "driver_specific": {} 00:17:06.385 } 00:17:06.385 ] 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.385 03:19:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.385 "name": "Existed_Raid", 00:17:06.385 "uuid": "2b5b7460-92fa-4614-9f29-90231ca1ecde", 00:17:06.385 "strip_size_kb": 64, 00:17:06.385 "state": "configuring", 00:17:06.385 "raid_level": "raid5f", 00:17:06.385 "superblock": true, 00:17:06.385 "num_base_bdevs": 4, 00:17:06.385 "num_base_bdevs_discovered": 3, 00:17:06.385 "num_base_bdevs_operational": 4, 00:17:06.385 "base_bdevs_list": [ 00:17:06.385 { 00:17:06.385 "name": "BaseBdev1", 00:17:06.385 "uuid": "5ff5391b-3d22-456b-93cb-fbcf7b5c1703", 00:17:06.385 "is_configured": true, 00:17:06.385 "data_offset": 2048, 00:17:06.385 "data_size": 63488 00:17:06.385 
}, 00:17:06.385 { 00:17:06.385 "name": null, 00:17:06.385 "uuid": "6d60bb43-9432-4fd4-bad2-9b307dac7704", 00:17:06.385 "is_configured": false, 00:17:06.385 "data_offset": 0, 00:17:06.385 "data_size": 63488 00:17:06.385 }, 00:17:06.385 { 00:17:06.385 "name": "BaseBdev3", 00:17:06.385 "uuid": "5989e3a1-0904-4a86-a73f-e47498923c60", 00:17:06.385 "is_configured": true, 00:17:06.385 "data_offset": 2048, 00:17:06.385 "data_size": 63488 00:17:06.385 }, 00:17:06.385 { 00:17:06.385 "name": "BaseBdev4", 00:17:06.385 "uuid": "01df8dd1-85f2-4dd7-8520-a63686e8a56f", 00:17:06.385 "is_configured": true, 00:17:06.385 "data_offset": 2048, 00:17:06.385 "data_size": 63488 00:17:06.385 } 00:17:06.385 ] 00:17:06.385 }' 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.385 03:19:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.954 
[2024-10-09 03:19:50.084879] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:06.954 "name": "Existed_Raid", 00:17:06.954 "uuid": "2b5b7460-92fa-4614-9f29-90231ca1ecde", 00:17:06.954 "strip_size_kb": 64, 00:17:06.954 "state": "configuring", 00:17:06.954 "raid_level": "raid5f", 00:17:06.954 "superblock": true, 00:17:06.954 "num_base_bdevs": 4, 00:17:06.954 "num_base_bdevs_discovered": 2, 00:17:06.954 "num_base_bdevs_operational": 4, 00:17:06.954 "base_bdevs_list": [ 00:17:06.954 { 00:17:06.954 "name": "BaseBdev1", 00:17:06.954 "uuid": "5ff5391b-3d22-456b-93cb-fbcf7b5c1703", 00:17:06.954 "is_configured": true, 00:17:06.954 "data_offset": 2048, 00:17:06.954 "data_size": 63488 00:17:06.954 }, 00:17:06.954 { 00:17:06.954 "name": null, 00:17:06.954 "uuid": "6d60bb43-9432-4fd4-bad2-9b307dac7704", 00:17:06.954 "is_configured": false, 00:17:06.954 "data_offset": 0, 00:17:06.954 "data_size": 63488 00:17:06.954 }, 00:17:06.954 { 00:17:06.954 "name": null, 00:17:06.954 "uuid": "5989e3a1-0904-4a86-a73f-e47498923c60", 00:17:06.954 "is_configured": false, 00:17:06.954 "data_offset": 0, 00:17:06.954 "data_size": 63488 00:17:06.954 }, 00:17:06.954 { 00:17:06.954 "name": "BaseBdev4", 00:17:06.954 "uuid": "01df8dd1-85f2-4dd7-8520-a63686e8a56f", 00:17:06.954 "is_configured": true, 00:17:06.954 "data_offset": 2048, 00:17:06.954 "data_size": 63488 00:17:06.954 } 00:17:06.954 ] 00:17:06.954 }' 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:06.954 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.214 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.214 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:07.214 03:19:50 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.214 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.479 [2024-10-09 03:19:50.552499] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.479 03:19:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.479 "name": "Existed_Raid", 00:17:07.479 "uuid": "2b5b7460-92fa-4614-9f29-90231ca1ecde", 00:17:07.479 "strip_size_kb": 64, 00:17:07.479 "state": "configuring", 00:17:07.479 "raid_level": "raid5f", 00:17:07.479 "superblock": true, 00:17:07.479 "num_base_bdevs": 4, 00:17:07.479 "num_base_bdevs_discovered": 3, 00:17:07.479 "num_base_bdevs_operational": 4, 00:17:07.479 "base_bdevs_list": [ 00:17:07.479 { 00:17:07.479 "name": "BaseBdev1", 00:17:07.479 "uuid": "5ff5391b-3d22-456b-93cb-fbcf7b5c1703", 00:17:07.479 "is_configured": true, 00:17:07.479 "data_offset": 2048, 00:17:07.479 "data_size": 63488 00:17:07.479 }, 00:17:07.479 { 00:17:07.479 "name": null, 00:17:07.479 "uuid": "6d60bb43-9432-4fd4-bad2-9b307dac7704", 00:17:07.479 "is_configured": false, 00:17:07.479 "data_offset": 0, 00:17:07.479 "data_size": 63488 00:17:07.479 }, 00:17:07.479 { 00:17:07.479 "name": "BaseBdev3", 00:17:07.479 "uuid": "5989e3a1-0904-4a86-a73f-e47498923c60", 00:17:07.479 "is_configured": true, 00:17:07.479 "data_offset": 2048, 00:17:07.479 "data_size": 63488 00:17:07.479 }, 00:17:07.479 { 
00:17:07.479 "name": "BaseBdev4", 00:17:07.479 "uuid": "01df8dd1-85f2-4dd7-8520-a63686e8a56f", 00:17:07.479 "is_configured": true, 00:17:07.479 "data_offset": 2048, 00:17:07.479 "data_size": 63488 00:17:07.479 } 00:17:07.479 ] 00:17:07.479 }' 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.479 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.744 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.744 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.744 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.744 03:19:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:07.744 03:19:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.744 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:07.744 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:07.744 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.744 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.744 [2024-10-09 03:19:51.035719] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:08.003 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.003 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:08.003 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:17:08.003 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.003 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.004 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.004 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.004 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.004 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.004 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.004 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.004 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.004 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.004 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.004 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.004 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.004 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.004 "name": "Existed_Raid", 00:17:08.004 "uuid": "2b5b7460-92fa-4614-9f29-90231ca1ecde", 00:17:08.004 "strip_size_kb": 64, 00:17:08.004 "state": "configuring", 00:17:08.004 "raid_level": "raid5f", 00:17:08.004 "superblock": true, 00:17:08.004 "num_base_bdevs": 4, 00:17:08.004 "num_base_bdevs_discovered": 2, 00:17:08.004 
"num_base_bdevs_operational": 4, 00:17:08.004 "base_bdevs_list": [ 00:17:08.004 { 00:17:08.004 "name": null, 00:17:08.004 "uuid": "5ff5391b-3d22-456b-93cb-fbcf7b5c1703", 00:17:08.004 "is_configured": false, 00:17:08.004 "data_offset": 0, 00:17:08.004 "data_size": 63488 00:17:08.004 }, 00:17:08.004 { 00:17:08.004 "name": null, 00:17:08.004 "uuid": "6d60bb43-9432-4fd4-bad2-9b307dac7704", 00:17:08.004 "is_configured": false, 00:17:08.004 "data_offset": 0, 00:17:08.004 "data_size": 63488 00:17:08.004 }, 00:17:08.004 { 00:17:08.004 "name": "BaseBdev3", 00:17:08.004 "uuid": "5989e3a1-0904-4a86-a73f-e47498923c60", 00:17:08.004 "is_configured": true, 00:17:08.004 "data_offset": 2048, 00:17:08.004 "data_size": 63488 00:17:08.004 }, 00:17:08.004 { 00:17:08.004 "name": "BaseBdev4", 00:17:08.004 "uuid": "01df8dd1-85f2-4dd7-8520-a63686e8a56f", 00:17:08.004 "is_configured": true, 00:17:08.004 "data_offset": 2048, 00:17:08.004 "data_size": 63488 00:17:08.004 } 00:17:08.004 ] 00:17:08.004 }' 00:17:08.004 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.004 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.263 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.263 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.263 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.263 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.523 [2024-10-09 03:19:51.613064] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.523 "name": "Existed_Raid", 00:17:08.523 "uuid": "2b5b7460-92fa-4614-9f29-90231ca1ecde", 00:17:08.523 "strip_size_kb": 64, 00:17:08.523 "state": "configuring", 00:17:08.523 "raid_level": "raid5f", 00:17:08.523 "superblock": true, 00:17:08.523 "num_base_bdevs": 4, 00:17:08.523 "num_base_bdevs_discovered": 3, 00:17:08.523 "num_base_bdevs_operational": 4, 00:17:08.523 "base_bdevs_list": [ 00:17:08.523 { 00:17:08.523 "name": null, 00:17:08.523 "uuid": "5ff5391b-3d22-456b-93cb-fbcf7b5c1703", 00:17:08.523 "is_configured": false, 00:17:08.523 "data_offset": 0, 00:17:08.523 "data_size": 63488 00:17:08.523 }, 00:17:08.523 { 00:17:08.523 "name": "BaseBdev2", 00:17:08.523 "uuid": "6d60bb43-9432-4fd4-bad2-9b307dac7704", 00:17:08.523 "is_configured": true, 00:17:08.523 "data_offset": 2048, 00:17:08.523 "data_size": 63488 00:17:08.523 }, 00:17:08.523 { 00:17:08.523 "name": "BaseBdev3", 00:17:08.523 "uuid": "5989e3a1-0904-4a86-a73f-e47498923c60", 00:17:08.523 "is_configured": true, 00:17:08.523 "data_offset": 2048, 00:17:08.523 "data_size": 63488 00:17:08.523 }, 00:17:08.523 { 00:17:08.523 "name": "BaseBdev4", 00:17:08.523 "uuid": "01df8dd1-85f2-4dd7-8520-a63686e8a56f", 00:17:08.523 "is_configured": true, 00:17:08.523 "data_offset": 2048, 00:17:08.523 "data_size": 63488 00:17:08.523 } 00:17:08.523 ] 00:17:08.523 }' 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.523 03:19:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:17:08.783 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.783 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.783 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.783 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:08.783 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 5ff5391b-3d22-456b-93cb-fbcf7b5c1703 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.044 [2024-10-09 03:19:52.191136] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:09.044 [2024-10-09 03:19:52.191485] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:09.044 [2024-10-09 
03:19:52.191503] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:09.044 [2024-10-09 03:19:52.191782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:17:09.044 NewBaseBdev 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@899 -- # local bdev_name=NewBaseBdev 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@901 -- # local i 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.044 [2024-10-09 03:19:52.198719] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:09.044 [2024-10-09 03:19:52.198787] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:09.044 [2024-10-09 03:19:52.199086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.044 [ 00:17:09.044 { 00:17:09.044 "name": "NewBaseBdev", 00:17:09.044 "aliases": [ 00:17:09.044 "5ff5391b-3d22-456b-93cb-fbcf7b5c1703" 00:17:09.044 ], 00:17:09.044 "product_name": "Malloc disk", 00:17:09.044 "block_size": 512, 00:17:09.044 "num_blocks": 65536, 00:17:09.044 "uuid": "5ff5391b-3d22-456b-93cb-fbcf7b5c1703", 00:17:09.044 "assigned_rate_limits": { 00:17:09.044 "rw_ios_per_sec": 0, 00:17:09.044 "rw_mbytes_per_sec": 0, 00:17:09.044 "r_mbytes_per_sec": 0, 00:17:09.044 "w_mbytes_per_sec": 0 00:17:09.044 }, 00:17:09.044 "claimed": true, 00:17:09.044 "claim_type": "exclusive_write", 00:17:09.044 "zoned": false, 00:17:09.044 "supported_io_types": { 00:17:09.044 "read": true, 00:17:09.044 "write": true, 00:17:09.044 "unmap": true, 00:17:09.044 "flush": true, 00:17:09.044 "reset": true, 00:17:09.044 "nvme_admin": false, 00:17:09.044 "nvme_io": false, 00:17:09.044 "nvme_io_md": false, 00:17:09.044 "write_zeroes": true, 00:17:09.044 "zcopy": true, 00:17:09.044 "get_zone_info": false, 00:17:09.044 "zone_management": false, 00:17:09.044 "zone_append": false, 00:17:09.044 "compare": false, 00:17:09.044 "compare_and_write": false, 00:17:09.044 "abort": true, 00:17:09.044 "seek_hole": false, 00:17:09.044 "seek_data": false, 00:17:09.044 "copy": true, 00:17:09.044 "nvme_iov_md": false 00:17:09.044 }, 00:17:09.044 "memory_domains": [ 00:17:09.044 { 00:17:09.044 "dma_device_id": "system", 00:17:09.044 "dma_device_type": 1 00:17:09.044 }, 00:17:09.044 { 00:17:09.044 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.044 "dma_device_type": 2 00:17:09.044 } 00:17:09.044 ], 00:17:09.044 "driver_specific": {} 00:17:09.044 } 00:17:09.044 ] 00:17:09.044 03:19:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@907 -- # return 0 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:09.044 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.045 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.045 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.045 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.045 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.045 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.045 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.045 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.045 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:09.045 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.045 "name": "Existed_Raid", 00:17:09.045 "uuid": "2b5b7460-92fa-4614-9f29-90231ca1ecde", 00:17:09.045 "strip_size_kb": 64, 00:17:09.045 "state": "online", 00:17:09.045 "raid_level": "raid5f", 00:17:09.045 "superblock": true, 00:17:09.045 "num_base_bdevs": 4, 00:17:09.045 "num_base_bdevs_discovered": 4, 00:17:09.045 "num_base_bdevs_operational": 4, 00:17:09.045 "base_bdevs_list": [ 00:17:09.045 { 00:17:09.045 "name": "NewBaseBdev", 00:17:09.045 "uuid": "5ff5391b-3d22-456b-93cb-fbcf7b5c1703", 00:17:09.045 "is_configured": true, 00:17:09.045 "data_offset": 2048, 00:17:09.045 "data_size": 63488 00:17:09.045 }, 00:17:09.045 { 00:17:09.045 "name": "BaseBdev2", 00:17:09.045 "uuid": "6d60bb43-9432-4fd4-bad2-9b307dac7704", 00:17:09.045 "is_configured": true, 00:17:09.045 "data_offset": 2048, 00:17:09.045 "data_size": 63488 00:17:09.045 }, 00:17:09.045 { 00:17:09.045 "name": "BaseBdev3", 00:17:09.045 "uuid": "5989e3a1-0904-4a86-a73f-e47498923c60", 00:17:09.045 "is_configured": true, 00:17:09.045 "data_offset": 2048, 00:17:09.045 "data_size": 63488 00:17:09.045 }, 00:17:09.045 { 00:17:09.045 "name": "BaseBdev4", 00:17:09.045 "uuid": "01df8dd1-85f2-4dd7-8520-a63686e8a56f", 00:17:09.045 "is_configured": true, 00:17:09.045 "data_offset": 2048, 00:17:09.045 "data_size": 63488 00:17:09.045 } 00:17:09.045 ] 00:17:09.045 }' 00:17:09.045 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.045 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.615 [2024-10-09 03:19:52.663595] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:09.615 "name": "Existed_Raid", 00:17:09.615 "aliases": [ 00:17:09.615 "2b5b7460-92fa-4614-9f29-90231ca1ecde" 00:17:09.615 ], 00:17:09.615 "product_name": "Raid Volume", 00:17:09.615 "block_size": 512, 00:17:09.615 "num_blocks": 190464, 00:17:09.615 "uuid": "2b5b7460-92fa-4614-9f29-90231ca1ecde", 00:17:09.615 "assigned_rate_limits": { 00:17:09.615 "rw_ios_per_sec": 0, 00:17:09.615 "rw_mbytes_per_sec": 0, 00:17:09.615 "r_mbytes_per_sec": 0, 00:17:09.615 "w_mbytes_per_sec": 0 00:17:09.615 }, 00:17:09.615 "claimed": false, 00:17:09.615 "zoned": false, 00:17:09.615 "supported_io_types": { 00:17:09.615 "read": true, 00:17:09.615 "write": true, 00:17:09.615 "unmap": false, 00:17:09.615 "flush": false, 00:17:09.615 "reset": true, 00:17:09.615 "nvme_admin": false, 00:17:09.615 "nvme_io": false, 
00:17:09.615 "nvme_io_md": false, 00:17:09.615 "write_zeroes": true, 00:17:09.615 "zcopy": false, 00:17:09.615 "get_zone_info": false, 00:17:09.615 "zone_management": false, 00:17:09.615 "zone_append": false, 00:17:09.615 "compare": false, 00:17:09.615 "compare_and_write": false, 00:17:09.615 "abort": false, 00:17:09.615 "seek_hole": false, 00:17:09.615 "seek_data": false, 00:17:09.615 "copy": false, 00:17:09.615 "nvme_iov_md": false 00:17:09.615 }, 00:17:09.615 "driver_specific": { 00:17:09.615 "raid": { 00:17:09.615 "uuid": "2b5b7460-92fa-4614-9f29-90231ca1ecde", 00:17:09.615 "strip_size_kb": 64, 00:17:09.615 "state": "online", 00:17:09.615 "raid_level": "raid5f", 00:17:09.615 "superblock": true, 00:17:09.615 "num_base_bdevs": 4, 00:17:09.615 "num_base_bdevs_discovered": 4, 00:17:09.615 "num_base_bdevs_operational": 4, 00:17:09.615 "base_bdevs_list": [ 00:17:09.615 { 00:17:09.615 "name": "NewBaseBdev", 00:17:09.615 "uuid": "5ff5391b-3d22-456b-93cb-fbcf7b5c1703", 00:17:09.615 "is_configured": true, 00:17:09.615 "data_offset": 2048, 00:17:09.615 "data_size": 63488 00:17:09.615 }, 00:17:09.615 { 00:17:09.615 "name": "BaseBdev2", 00:17:09.615 "uuid": "6d60bb43-9432-4fd4-bad2-9b307dac7704", 00:17:09.615 "is_configured": true, 00:17:09.615 "data_offset": 2048, 00:17:09.615 "data_size": 63488 00:17:09.615 }, 00:17:09.615 { 00:17:09.615 "name": "BaseBdev3", 00:17:09.615 "uuid": "5989e3a1-0904-4a86-a73f-e47498923c60", 00:17:09.615 "is_configured": true, 00:17:09.615 "data_offset": 2048, 00:17:09.615 "data_size": 63488 00:17:09.615 }, 00:17:09.615 { 00:17:09.615 "name": "BaseBdev4", 00:17:09.615 "uuid": "01df8dd1-85f2-4dd7-8520-a63686e8a56f", 00:17:09.615 "is_configured": true, 00:17:09.615 "data_offset": 2048, 00:17:09.615 "data_size": 63488 00:17:09.615 } 00:17:09.615 ] 00:17:09.615 } 00:17:09.615 } 00:17:09.615 }' 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | 
select(.is_configured == true).name' 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:09.615 BaseBdev2 00:17:09.615 BaseBdev3 00:17:09.615 BaseBdev4' 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.615 03:19:52 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.615 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.876 [2024-10-09 03:19:52.982832] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:09.876 [2024-10-09 03:19:52.982875] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:09.876 [2024-10-09 03:19:52.982958] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:09.876 [2024-10-09 03:19:52.983271] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:09.876 [2024-10-09 03:19:52.983281] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83722 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@950 -- # '[' -z 83722 ']' 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # kill -0 83722 00:17:09.876 03:19:52 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # uname 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:09.876 03:19:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 83722 00:17:09.876 killing process with pid 83722 00:17:09.876 03:19:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:09.876 03:19:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:09.876 03:19:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 83722' 00:17:09.876 03:19:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@969 -- # kill 83722 00:17:09.876 [2024-10-09 03:19:53.032370] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:09.876 03:19:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@974 -- # wait 83722 00:17:10.446 [2024-10-09 03:19:53.458938] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:11.829 03:19:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:11.829 00:17:11.829 real 0m11.861s 00:17:11.829 user 0m18.422s 00:17:11.829 sys 0m2.260s 00:17:11.829 03:19:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:11.829 03:19:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.829 ************************************ 00:17:11.829 END TEST raid5f_state_function_test_sb 00:17:11.829 ************************************ 00:17:11.829 03:19:54 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:17:11.829 03:19:54 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:11.829 
03:19:54 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:11.829 03:19:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:11.829 ************************************ 00:17:11.829 START TEST raid5f_superblock_test 00:17:11.829 ************************************ 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1125 -- # raid_superblock_test raid5f 4 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84392 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84392 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@831 -- # '[' -z 84392 ']' 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:11.829 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:11.829 03:19:54 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:11.829 [2024-10-09 03:19:55.011362] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:17:11.829 [2024-10-09 03:19:55.011578] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84392 ]
00:17:12.089 [2024-10-09 03:19:55.181938] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1
00:17:12.349 [2024-10-09 03:19:55.438858] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0
00:17:12.610 [2024-10-09 03:19:55.671459] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:12.610 [2024-10-09 03:19:55.671502] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # return 0
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 ))
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:12.610 malloc1
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:12.610 [2024-10-09 03:19:55.879980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:12.610 [2024-10-09 03:19:55.880127] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:12.610 [2024-10-09 03:19:55.880170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280
00:17:12.610 [2024-10-09 03:19:55.880201] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:12.610 [2024-10-09 03:19:55.882490] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:12.610 [2024-10-09 03:19:55.882564] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:12.610 pt1
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.610 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:12.869 malloc2
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:12.869 [2024-10-09 03:19:55.954086] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:12.869 [2024-10-09 03:19:55.954159] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:12.869 [2024-10-09 03:19:55.954180] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80
00:17:12.869 [2024-10-09 03:19:55.954189] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:12.869 [2024-10-09 03:19:55.956459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:12.869 [2024-10-09 03:19:55.956546] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:12.869 pt2
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.869 03:19:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:12.869 malloc3
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:12.869 [2024-10-09 03:19:56.015547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3
00:17:12.869 [2024-10-09 03:19:56.015653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:12.869 [2024-10-09 03:19:56.015691] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80
00:17:12.869 [2024-10-09 03:19:56.015718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:12.869 [2024-10-09 03:19:56.018078] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:12.869 [2024-10-09 03:19:56.018150] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3
00:17:12.869 pt3
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc)
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt)
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid)
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:12.869 malloc4
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:12.869 [2024-10-09 03:19:56.082514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4
00:17:12.869 [2024-10-09 03:19:56.082623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:12.869 [2024-10-09 03:19:56.082659] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680
00:17:12.869 [2024-10-09 03:19:56.082686] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:12.869 [2024-10-09 03:19:56.085054] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:12.869 [2024-10-09 03:19:56.085137] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4
00:17:12.869 pt4
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ ))
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs ))
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:12.869 [2024-10-09 03:19:56.094564] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:12.869 [2024-10-09 03:19:56.096548] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:12.869 [2024-10-09 03:19:56.096609] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed
00:17:12.869 [2024-10-09 03:19:56.096668] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed
00:17:12.869 [2024-10-09 03:19:56.096927] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:17:12.869 [2024-10-09 03:19:56.096952] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512
00:17:12.869 [2024-10-09 03:19:56.097264] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0
00:17:12.869 [2024-10-09 03:19:56.105083] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:17:12.869 [2024-10-09 03:19:56.105111] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780
00:17:12.869 [2024-10-09 03:19:56.105339] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:12.869 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:12.870 "name": "raid_bdev1",
00:17:12.870 "uuid": "4286455f-9768-4bbf-896e-99705c274c05",
00:17:12.870 "strip_size_kb": 64,
00:17:12.870 "state": "online",
00:17:12.870 "raid_level": "raid5f",
00:17:12.870 "superblock": true,
00:17:12.870 "num_base_bdevs": 4,
00:17:12.870 "num_base_bdevs_discovered": 4,
00:17:12.870 "num_base_bdevs_operational": 4,
00:17:12.870 "base_bdevs_list": [
00:17:12.870 {
00:17:12.870 "name": "pt1",
00:17:12.870 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:12.870 "is_configured": true,
00:17:12.870 "data_offset": 2048,
00:17:12.870 "data_size": 63488
00:17:12.870 },
00:17:12.870 {
00:17:12.870 "name": "pt2",
00:17:12.870 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:12.870 "is_configured": true,
00:17:12.870 "data_offset": 2048,
00:17:12.870 "data_size": 63488
00:17:12.870 },
00:17:12.870 {
00:17:12.870 "name": "pt3",
00:17:12.870 "uuid": "00000000-0000-0000-0000-000000000003",
00:17:12.870 "is_configured": true,
00:17:12.870 "data_offset": 2048,
00:17:12.870 "data_size": 63488
00:17:12.870 },
00:17:12.870 {
00:17:12.870 "name": "pt4",
00:17:12.870 "uuid": "00000000-0000-0000-0000-000000000004",
00:17:12.870 "is_configured": true,
00:17:12.870 "data_offset": 2048,
00:17:12.870 "data_size": 63488
00:17:12.870 }
00:17:12.870 ]
00:17:12.870 }'
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:12.870 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.440 [2024-10-09 03:19:56.530526] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:17:13.440 "name": "raid_bdev1",
00:17:13.440 "aliases": [
00:17:13.440 "4286455f-9768-4bbf-896e-99705c274c05"
00:17:13.440 ],
00:17:13.440 "product_name": "Raid Volume",
00:17:13.440 "block_size": 512,
00:17:13.440 "num_blocks": 190464,
00:17:13.440 "uuid": "4286455f-9768-4bbf-896e-99705c274c05",
00:17:13.440 "assigned_rate_limits": {
00:17:13.440 "rw_ios_per_sec": 0,
00:17:13.440 "rw_mbytes_per_sec": 0,
00:17:13.440 "r_mbytes_per_sec": 0,
00:17:13.440 "w_mbytes_per_sec": 0
00:17:13.440 },
00:17:13.440 "claimed": false,
00:17:13.440 "zoned": false,
00:17:13.440 "supported_io_types": {
00:17:13.440 "read": true,
00:17:13.440 "write": true,
00:17:13.440 "unmap": false,
00:17:13.440 "flush": false,
00:17:13.440 "reset": true,
00:17:13.440 "nvme_admin": false,
00:17:13.440 "nvme_io": false,
00:17:13.440 "nvme_io_md": false,
00:17:13.440 "write_zeroes": true,
00:17:13.440 "zcopy": false,
00:17:13.440 "get_zone_info": false,
00:17:13.440 "zone_management": false,
00:17:13.440 "zone_append": false,
00:17:13.440 "compare": false,
00:17:13.440 "compare_and_write": false,
00:17:13.440 "abort": false,
00:17:13.440 "seek_hole": false,
00:17:13.440 "seek_data": false,
00:17:13.440 "copy": false,
00:17:13.440 "nvme_iov_md": false
00:17:13.440 },
00:17:13.440 "driver_specific": {
00:17:13.440 "raid": {
00:17:13.440 "uuid": "4286455f-9768-4bbf-896e-99705c274c05",
00:17:13.440 "strip_size_kb": 64,
00:17:13.440 "state": "online",
00:17:13.440 "raid_level": "raid5f",
00:17:13.440 "superblock": true,
00:17:13.440 "num_base_bdevs": 4,
00:17:13.440 "num_base_bdevs_discovered": 4,
00:17:13.440 "num_base_bdevs_operational": 4,
00:17:13.440 "base_bdevs_list": [
00:17:13.440 {
00:17:13.440 "name": "pt1",
00:17:13.440 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:13.440 "is_configured": true,
00:17:13.440 "data_offset": 2048,
00:17:13.440 "data_size": 63488
00:17:13.440 },
00:17:13.440 {
00:17:13.440 "name": "pt2",
00:17:13.440 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:13.440 "is_configured": true,
00:17:13.440 "data_offset": 2048,
00:17:13.440 "data_size": 63488
00:17:13.440 },
00:17:13.440 {
00:17:13.440 "name": "pt3",
00:17:13.440 "uuid": "00000000-0000-0000-0000-000000000003",
00:17:13.440 "is_configured": true,
00:17:13.440 "data_offset": 2048,
00:17:13.440 "data_size": 63488
00:17:13.440 },
00:17:13.440 {
00:17:13.440 "name": "pt4",
00:17:13.440 "uuid": "00000000-0000-0000-0000-000000000004",
00:17:13.440 "is_configured": true,
00:17:13.440 "data_offset": 2048,
00:17:13.440 "data_size": 63488
00:17:13.440 }
00:17:13.440 ]
00:17:13.440 }
00:17:13.440 }
00:17:13.440 }'
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1
00:17:13.440 pt2
00:17:13.440 pt3
00:17:13.440 pt4'
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.440 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid'
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.700 [2024-10-09 03:19:56.841961] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=4286455f-9768-4bbf-896e-99705c274c05
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 4286455f-9768-4bbf-896e-99705c274c05 ']'
00:17:13.700 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.701 [2024-10-09 03:19:56.885721] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:13.701 [2024-10-09 03:19:56.885785] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:17:13.701 [2024-10-09 03:19:56.885887] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:17:13.701 [2024-10-09 03:19:56.885985] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:17:13.701 [2024-10-09 03:19:56.886023] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]'
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev=
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']'
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}"
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any'
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.701 03:19:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']'
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@650 -- # local es=0
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@638 -- # local arg=rpc_cmd
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # type -t rpc_cmd
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.961 [2024-10-09 03:19:57.049456] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed
00:17:13.961 [2024-10-09 03:19:57.051406] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed
00:17:13.961 [2024-10-09 03:19:57.051453] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed
00:17:13.961 [2024-10-09 03:19:57.051483] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed
00:17:13.961 [2024-10-09 03:19:57.051525] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1
00:17:13.961 [2024-10-09 03:19:57.051565] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2
00:17:13.961 [2024-10-09 03:19:57.051582] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3
00:17:13.961 [2024-10-09 03:19:57.051599] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4
00:17:13.961 [2024-10-09 03:19:57.051611] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1
00:17:13.961 [2024-10-09 03:19:57.051622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring
00:17:13.961 request:
00:17:13.961 {
00:17:13.961 "name": "raid_bdev1",
00:17:13.961 "raid_level": "raid5f",
00:17:13.961 "base_bdevs": [
00:17:13.961 "malloc1",
00:17:13.961 "malloc2",
00:17:13.961 "malloc3",
00:17:13.961 "malloc4"
00:17:13.961 ],
00:17:13.961 "strip_size_kb": 64,
00:17:13.961 "superblock": false,
00:17:13.961 "method": "bdev_raid_create",
00:17:13.961 "req_id": 1
00:17:13.961 }
00:17:13.961 Got JSON-RPC error response
00:17:13.961 response:
00:17:13.961 {
00:17:13.961 "code": -17,
00:17:13.961 "message": "Failed to create RAID bdev raid_bdev1: File exists"
00:17:13.961 }
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@653 -- # es=1
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]'
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev=
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']'
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.961 [2024-10-09 03:19:57.105329] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1
00:17:13.961 [2024-10-09 03:19:57.105376] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:13.961 [2024-10-09 03:19:57.105391] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280
00:17:13.961 [2024-10-09 03:19:57.105401] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:13.961 [2024-10-09 03:19:57.107669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:13.961 [2024-10-09 03:19:57.107708] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1
00:17:13.961 [2024-10-09 03:19:57.107767] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1
00:17:13.961 [2024-10-09 03:19:57.107831] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed
00:17:13.961 pt1
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.961 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:17:13.962 "name": "raid_bdev1",
00:17:13.962 "uuid": "4286455f-9768-4bbf-896e-99705c274c05",
00:17:13.962 "strip_size_kb": 64,
00:17:13.962 "state": "configuring",
00:17:13.962 "raid_level": "raid5f",
00:17:13.962 "superblock": true,
00:17:13.962 "num_base_bdevs": 4,
00:17:13.962 "num_base_bdevs_discovered": 1,
00:17:13.962 "num_base_bdevs_operational": 4,
00:17:13.962 "base_bdevs_list": [
00:17:13.962 {
00:17:13.962 "name": "pt1",
00:17:13.962 "uuid": "00000000-0000-0000-0000-000000000001",
00:17:13.962 "is_configured": true,
00:17:13.962 "data_offset": 2048,
00:17:13.962 "data_size": 63488
00:17:13.962 },
00:17:13.962 {
00:17:13.962 "name": null,
00:17:13.962 "uuid": "00000000-0000-0000-0000-000000000002",
00:17:13.962 "is_configured": false,
00:17:13.962 "data_offset": 2048,
00:17:13.962 "data_size": 63488
00:17:13.962 },
00:17:13.962 {
00:17:13.962 "name": null,
00:17:13.962 "uuid": "00000000-0000-0000-0000-000000000003",
00:17:13.962 "is_configured": false,
00:17:13.962 "data_offset": 2048,
00:17:13.962 "data_size": 63488
00:17:13.962 },
00:17:13.962 {
00:17:13.962 "name": null,
00:17:13.962 "uuid": "00000000-0000-0000-0000-000000000004",
00:17:13.962 "is_configured": false,
00:17:13.962 "data_offset": 2048,
00:17:13.962 "data_size": 63488
00:17:13.962 }
00:17:13.962 ]
00:17:13.962 }'
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:17:13.962 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']'
00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x
00:17:14.532 [2024-10-09 03:19:57.536688] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2
00:17:14.532 [2024-10-09 03:19:57.536784] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:17:14.532 [2024-10-09 03:19:57.536814] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880
00:17:14.532 [2024-10-09 03:19:57.536854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:17:14.532 [2024-10-09 03:19:57.537228] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:17:14.532 [2024-10-09 03:19:57.537285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2
00:17:14.532 [2024-10-09 03:19:57.537366] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2
00:17:14.532 [2024-10-09 03:19:57.537412] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed
00:17:14.532 pt2
00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2
00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable
00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.532 [2024-10-09 03:19:57.548692] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.532 "name": "raid_bdev1", 00:17:14.532 "uuid": "4286455f-9768-4bbf-896e-99705c274c05", 00:17:14.532 "strip_size_kb": 64, 00:17:14.532 "state": "configuring", 00:17:14.532 "raid_level": "raid5f", 00:17:14.532 "superblock": true, 00:17:14.532 "num_base_bdevs": 4, 00:17:14.532 "num_base_bdevs_discovered": 1, 00:17:14.532 "num_base_bdevs_operational": 4, 00:17:14.532 "base_bdevs_list": [ 00:17:14.532 { 00:17:14.532 "name": "pt1", 00:17:14.532 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.532 "is_configured": true, 00:17:14.532 "data_offset": 2048, 00:17:14.532 "data_size": 63488 00:17:14.532 }, 00:17:14.532 { 00:17:14.532 "name": null, 00:17:14.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.532 "is_configured": false, 00:17:14.532 "data_offset": 0, 00:17:14.532 "data_size": 63488 00:17:14.532 }, 00:17:14.532 { 00:17:14.532 "name": null, 00:17:14.532 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.532 "is_configured": false, 00:17:14.532 "data_offset": 2048, 00:17:14.532 "data_size": 63488 00:17:14.532 }, 00:17:14.532 { 00:17:14.532 "name": null, 00:17:14.532 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:14.532 "is_configured": false, 00:17:14.532 "data_offset": 2048, 00:17:14.532 "data_size": 63488 00:17:14.532 } 00:17:14.532 ] 00:17:14.532 }' 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.532 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.792 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:14.792 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.793 [2024-10-09 03:19:57.928006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:14.793 [2024-10-09 03:19:57.928047] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.793 [2024-10-09 03:19:57.928062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:14.793 [2024-10-09 03:19:57.928070] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.793 [2024-10-09 03:19:57.928399] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.793 [2024-10-09 03:19:57.928414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:14.793 [2024-10-09 03:19:57.928470] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:14.793 [2024-10-09 03:19:57.928484] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:14.793 pt2 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.793 [2024-10-09 03:19:57.939992] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:17:14.793 [2024-10-09 03:19:57.940041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.793 [2024-10-09 03:19:57.940059] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:14.793 [2024-10-09 03:19:57.940066] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.793 [2024-10-09 03:19:57.940382] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.793 [2024-10-09 03:19:57.940396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:14.793 [2024-10-09 03:19:57.940447] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:14.793 [2024-10-09 03:19:57.940461] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:14.793 pt3 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.793 [2024-10-09 03:19:57.951955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:14.793 [2024-10-09 03:19:57.951998] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:14.793 [2024-10-09 03:19:57.952014] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:17:14.793 [2024-10-09 03:19:57.952022] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:14.793 [2024-10-09 03:19:57.952360] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:14.793 [2024-10-09 03:19:57.952374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:14.793 [2024-10-09 03:19:57.952425] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:14.793 [2024-10-09 03:19:57.952440] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:14.793 [2024-10-09 03:19:57.952572] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:14.793 [2024-10-09 03:19:57.952580] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:14.793 [2024-10-09 03:19:57.952853] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:14.793 [2024-10-09 03:19:57.958855] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:14.793 [2024-10-09 03:19:57.958918] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:14.793 [2024-10-09 03:19:57.959085] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:14.793 pt4 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:14.793 03:19:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:14.793 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.793 "name": "raid_bdev1", 00:17:14.793 "uuid": "4286455f-9768-4bbf-896e-99705c274c05", 00:17:14.793 "strip_size_kb": 64, 00:17:14.793 "state": "online", 00:17:14.793 "raid_level": "raid5f", 00:17:14.793 "superblock": true, 00:17:14.793 "num_base_bdevs": 4, 00:17:14.793 "num_base_bdevs_discovered": 4, 00:17:14.793 "num_base_bdevs_operational": 4, 00:17:14.793 "base_bdevs_list": [ 00:17:14.793 { 00:17:14.793 "name": "pt1", 00:17:14.793 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:14.793 "is_configured": true, 00:17:14.793 
"data_offset": 2048, 00:17:14.793 "data_size": 63488 00:17:14.793 }, 00:17:14.793 { 00:17:14.793 "name": "pt2", 00:17:14.793 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:14.793 "is_configured": true, 00:17:14.793 "data_offset": 2048, 00:17:14.793 "data_size": 63488 00:17:14.793 }, 00:17:14.793 { 00:17:14.793 "name": "pt3", 00:17:14.793 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:14.793 "is_configured": true, 00:17:14.793 "data_offset": 2048, 00:17:14.793 "data_size": 63488 00:17:14.793 }, 00:17:14.793 { 00:17:14.793 "name": "pt4", 00:17:14.793 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:14.793 "is_configured": true, 00:17:14.793 "data_offset": 2048, 00:17:14.793 "data_size": 63488 00:17:14.793 } 00:17:14.793 ] 00:17:14.793 }' 00:17:14.793 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.793 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.363 03:19:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.363 [2024-10-09 03:19:58.371776] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:15.363 "name": "raid_bdev1", 00:17:15.363 "aliases": [ 00:17:15.363 "4286455f-9768-4bbf-896e-99705c274c05" 00:17:15.363 ], 00:17:15.363 "product_name": "Raid Volume", 00:17:15.363 "block_size": 512, 00:17:15.363 "num_blocks": 190464, 00:17:15.363 "uuid": "4286455f-9768-4bbf-896e-99705c274c05", 00:17:15.363 "assigned_rate_limits": { 00:17:15.363 "rw_ios_per_sec": 0, 00:17:15.363 "rw_mbytes_per_sec": 0, 00:17:15.363 "r_mbytes_per_sec": 0, 00:17:15.363 "w_mbytes_per_sec": 0 00:17:15.363 }, 00:17:15.363 "claimed": false, 00:17:15.363 "zoned": false, 00:17:15.363 "supported_io_types": { 00:17:15.363 "read": true, 00:17:15.363 "write": true, 00:17:15.363 "unmap": false, 00:17:15.363 "flush": false, 00:17:15.363 "reset": true, 00:17:15.363 "nvme_admin": false, 00:17:15.363 "nvme_io": false, 00:17:15.363 "nvme_io_md": false, 00:17:15.363 "write_zeroes": true, 00:17:15.363 "zcopy": false, 00:17:15.363 "get_zone_info": false, 00:17:15.363 "zone_management": false, 00:17:15.363 "zone_append": false, 00:17:15.363 "compare": false, 00:17:15.363 "compare_and_write": false, 00:17:15.363 "abort": false, 00:17:15.363 "seek_hole": false, 00:17:15.363 "seek_data": false, 00:17:15.363 "copy": false, 00:17:15.363 "nvme_iov_md": false 00:17:15.363 }, 00:17:15.363 "driver_specific": { 00:17:15.363 "raid": { 00:17:15.363 "uuid": "4286455f-9768-4bbf-896e-99705c274c05", 00:17:15.363 "strip_size_kb": 64, 00:17:15.363 "state": "online", 00:17:15.363 "raid_level": "raid5f", 00:17:15.363 "superblock": true, 00:17:15.363 "num_base_bdevs": 4, 00:17:15.363 "num_base_bdevs_discovered": 4, 
00:17:15.363 "num_base_bdevs_operational": 4, 00:17:15.363 "base_bdevs_list": [ 00:17:15.363 { 00:17:15.363 "name": "pt1", 00:17:15.363 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:15.363 "is_configured": true, 00:17:15.363 "data_offset": 2048, 00:17:15.363 "data_size": 63488 00:17:15.363 }, 00:17:15.363 { 00:17:15.363 "name": "pt2", 00:17:15.363 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.363 "is_configured": true, 00:17:15.363 "data_offset": 2048, 00:17:15.363 "data_size": 63488 00:17:15.363 }, 00:17:15.363 { 00:17:15.363 "name": "pt3", 00:17:15.363 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.363 "is_configured": true, 00:17:15.363 "data_offset": 2048, 00:17:15.363 "data_size": 63488 00:17:15.363 }, 00:17:15.363 { 00:17:15.363 "name": "pt4", 00:17:15.363 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:15.363 "is_configured": true, 00:17:15.363 "data_offset": 2048, 00:17:15.363 "data_size": 63488 00:17:15.363 } 00:17:15.363 ] 00:17:15.363 } 00:17:15.363 } 00:17:15.363 }' 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:15.363 pt2 00:17:15.363 pt3 00:17:15.363 pt4' 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt1 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.363 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.364 03:19:58 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.364 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.364 [2024-10-09 03:19:58.663233] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.624 
03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 4286455f-9768-4bbf-896e-99705c274c05 '!=' 4286455f-9768-4bbf-896e-99705c274c05 ']' 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.624 [2024-10-09 03:19:58.707041] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.624 "name": "raid_bdev1", 00:17:15.624 "uuid": "4286455f-9768-4bbf-896e-99705c274c05", 00:17:15.624 "strip_size_kb": 64, 00:17:15.624 "state": "online", 00:17:15.624 "raid_level": "raid5f", 00:17:15.624 "superblock": true, 00:17:15.624 "num_base_bdevs": 4, 00:17:15.624 "num_base_bdevs_discovered": 3, 00:17:15.624 "num_base_bdevs_operational": 3, 00:17:15.624 "base_bdevs_list": [ 00:17:15.624 { 00:17:15.624 "name": null, 00:17:15.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:15.624 "is_configured": false, 00:17:15.624 "data_offset": 0, 00:17:15.624 "data_size": 63488 00:17:15.624 }, 00:17:15.624 { 00:17:15.624 "name": "pt2", 00:17:15.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:15.624 "is_configured": true, 00:17:15.624 "data_offset": 2048, 00:17:15.624 "data_size": 63488 00:17:15.624 }, 00:17:15.624 { 00:17:15.624 "name": "pt3", 00:17:15.624 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:15.624 "is_configured": true, 00:17:15.624 "data_offset": 2048, 00:17:15.624 "data_size": 63488 00:17:15.624 }, 00:17:15.624 { 00:17:15.624 "name": "pt4", 00:17:15.624 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:15.624 "is_configured": true, 00:17:15.624 
"data_offset": 2048, 00:17:15.624 "data_size": 63488 00:17:15.624 } 00:17:15.624 ] 00:17:15.624 }' 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.624 03:19:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.884 [2024-10-09 03:19:59.106682] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:15.884 [2024-10-09 03:19:59.106709] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:15.884 [2024-10-09 03:19:59.106758] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:15.884 [2024-10-09 03:19:59.106815] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:15.884 [2024-10-09 03:19:59.106823] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.884 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:15.885 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.885 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:15.885 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:15.885 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:17:15.885 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.885 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.145 [2024-10-09 03:19:59.202510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:16.145 [2024-10-09 03:19:59.202556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.145 [2024-10-09 03:19:59.202572] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:16.145 [2024-10-09 03:19:59.202580] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.145 [2024-10-09 03:19:59.204819] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.145 [2024-10-09 03:19:59.204859] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:16.145 [2024-10-09 03:19:59.204917] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:16.145 [2024-10-09 03:19:59.204958] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:16.145 pt2 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.145 "name": "raid_bdev1", 00:17:16.145 "uuid": "4286455f-9768-4bbf-896e-99705c274c05", 00:17:16.145 "strip_size_kb": 64, 00:17:16.145 "state": "configuring", 00:17:16.145 "raid_level": "raid5f", 00:17:16.145 "superblock": true, 00:17:16.145 
"num_base_bdevs": 4, 00:17:16.145 "num_base_bdevs_discovered": 1, 00:17:16.145 "num_base_bdevs_operational": 3, 00:17:16.145 "base_bdevs_list": [ 00:17:16.145 { 00:17:16.145 "name": null, 00:17:16.145 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.145 "is_configured": false, 00:17:16.145 "data_offset": 2048, 00:17:16.145 "data_size": 63488 00:17:16.145 }, 00:17:16.145 { 00:17:16.145 "name": "pt2", 00:17:16.145 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.145 "is_configured": true, 00:17:16.145 "data_offset": 2048, 00:17:16.145 "data_size": 63488 00:17:16.145 }, 00:17:16.145 { 00:17:16.145 "name": null, 00:17:16.145 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.145 "is_configured": false, 00:17:16.145 "data_offset": 2048, 00:17:16.145 "data_size": 63488 00:17:16.145 }, 00:17:16.145 { 00:17:16.145 "name": null, 00:17:16.145 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:16.145 "is_configured": false, 00:17:16.145 "data_offset": 2048, 00:17:16.145 "data_size": 63488 00:17:16.145 } 00:17:16.145 ] 00:17:16.145 }' 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.145 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.406 [2024-10-09 03:19:59.649788] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:16.406 [2024-10-09 
03:19:59.649830] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.406 [2024-10-09 03:19:59.649855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:16.406 [2024-10-09 03:19:59.649863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.406 [2024-10-09 03:19:59.650187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.406 [2024-10-09 03:19:59.650211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:16.406 [2024-10-09 03:19:59.650267] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:16.406 [2024-10-09 03:19:59.650289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:16.406 pt3 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.406 "name": "raid_bdev1", 00:17:16.406 "uuid": "4286455f-9768-4bbf-896e-99705c274c05", 00:17:16.406 "strip_size_kb": 64, 00:17:16.406 "state": "configuring", 00:17:16.406 "raid_level": "raid5f", 00:17:16.406 "superblock": true, 00:17:16.406 "num_base_bdevs": 4, 00:17:16.406 "num_base_bdevs_discovered": 2, 00:17:16.406 "num_base_bdevs_operational": 3, 00:17:16.406 "base_bdevs_list": [ 00:17:16.406 { 00:17:16.406 "name": null, 00:17:16.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.406 "is_configured": false, 00:17:16.406 "data_offset": 2048, 00:17:16.406 "data_size": 63488 00:17:16.406 }, 00:17:16.406 { 00:17:16.406 "name": "pt2", 00:17:16.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.406 "is_configured": true, 00:17:16.406 "data_offset": 2048, 00:17:16.406 "data_size": 63488 00:17:16.406 }, 00:17:16.406 { 00:17:16.406 "name": "pt3", 00:17:16.406 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.406 "is_configured": true, 00:17:16.406 "data_offset": 2048, 00:17:16.406 "data_size": 63488 00:17:16.406 }, 00:17:16.406 { 00:17:16.406 "name": null, 00:17:16.406 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:16.406 "is_configured": false, 00:17:16.406 "data_offset": 2048, 
00:17:16.406 "data_size": 63488 00:17:16.406 } 00:17:16.406 ] 00:17:16.406 }' 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.406 03:19:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.976 [2024-10-09 03:20:00.085021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:16.976 [2024-10-09 03:20:00.085068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:16.976 [2024-10-09 03:20:00.085085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:17:16.976 [2024-10-09 03:20:00.085093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:16.976 [2024-10-09 03:20:00.085430] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:16.976 [2024-10-09 03:20:00.085456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:17:16.976 [2024-10-09 03:20:00.085506] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:16.976 [2024-10-09 03:20:00.085522] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:16.976 [2024-10-09 03:20:00.085619] 
bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:16.976 [2024-10-09 03:20:00.085627] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:16.976 [2024-10-09 03:20:00.085873] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:16.976 [2024-10-09 03:20:00.092012] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:16.976 [2024-10-09 03:20:00.092036] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:17:16.976 [2024-10-09 03:20:00.092288] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.976 pt4 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.976 
03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.976 "name": "raid_bdev1", 00:17:16.976 "uuid": "4286455f-9768-4bbf-896e-99705c274c05", 00:17:16.976 "strip_size_kb": 64, 00:17:16.976 "state": "online", 00:17:16.976 "raid_level": "raid5f", 00:17:16.976 "superblock": true, 00:17:16.976 "num_base_bdevs": 4, 00:17:16.976 "num_base_bdevs_discovered": 3, 00:17:16.976 "num_base_bdevs_operational": 3, 00:17:16.976 "base_bdevs_list": [ 00:17:16.976 { 00:17:16.976 "name": null, 00:17:16.976 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:16.976 "is_configured": false, 00:17:16.976 "data_offset": 2048, 00:17:16.976 "data_size": 63488 00:17:16.976 }, 00:17:16.976 { 00:17:16.976 "name": "pt2", 00:17:16.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:16.976 "is_configured": true, 00:17:16.976 "data_offset": 2048, 00:17:16.976 "data_size": 63488 00:17:16.976 }, 00:17:16.976 { 00:17:16.976 "name": "pt3", 00:17:16.976 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:16.976 "is_configured": true, 00:17:16.976 "data_offset": 2048, 00:17:16.976 "data_size": 63488 00:17:16.976 }, 00:17:16.976 { 00:17:16.976 "name": "pt4", 00:17:16.976 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:16.976 "is_configured": true, 00:17:16.976 "data_offset": 2048, 00:17:16.976 "data_size": 63488 00:17:16.976 } 00:17:16.976 ] 00:17:16.976 }' 00:17:16.976 03:20:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.976 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.237 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:17.237 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.237 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.237 [2024-10-09 03:20:00.527218] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.237 [2024-10-09 03:20:00.527243] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.237 [2024-10-09 03:20:00.527294] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.237 [2024-10-09 03:20:00.527349] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.237 [2024-10-09 03:20:00.527363] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:17.237 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.237 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.237 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:17.237 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.237 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.496 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.496 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:17.496 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:17:17.496 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:17:17.496 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:17:17.496 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:17:17.496 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.496 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.496 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.496 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:17.496 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.496 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.497 [2024-10-09 03:20:00.599091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:17.497 [2024-10-09 03:20:00.599157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:17.497 [2024-10-09 03:20:00.599171] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:17:17.497 [2024-10-09 03:20:00.599182] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:17.497 [2024-10-09 03:20:00.601547] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:17.497 [2024-10-09 03:20:00.601583] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:17.497 [2024-10-09 03:20:00.601638] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:17.497 [2024-10-09 03:20:00.601684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:17.497 
[2024-10-09 03:20:00.601795] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:17.497 [2024-10-09 03:20:00.601809] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:17.497 [2024-10-09 03:20:00.601822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:17.497 [2024-10-09 03:20:00.601882] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:17.497 [2024-10-09 03:20:00.601973] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:17.497 pt1 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:17.497 "name": "raid_bdev1", 00:17:17.497 "uuid": "4286455f-9768-4bbf-896e-99705c274c05", 00:17:17.497 "strip_size_kb": 64, 00:17:17.497 "state": "configuring", 00:17:17.497 "raid_level": "raid5f", 00:17:17.497 "superblock": true, 00:17:17.497 "num_base_bdevs": 4, 00:17:17.497 "num_base_bdevs_discovered": 2, 00:17:17.497 "num_base_bdevs_operational": 3, 00:17:17.497 "base_bdevs_list": [ 00:17:17.497 { 00:17:17.497 "name": null, 00:17:17.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.497 "is_configured": false, 00:17:17.497 "data_offset": 2048, 00:17:17.497 "data_size": 63488 00:17:17.497 }, 00:17:17.497 { 00:17:17.497 "name": "pt2", 00:17:17.497 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:17.497 "is_configured": true, 00:17:17.497 "data_offset": 2048, 00:17:17.497 "data_size": 63488 00:17:17.497 }, 00:17:17.497 { 00:17:17.497 "name": "pt3", 00:17:17.497 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:17.497 "is_configured": true, 00:17:17.497 "data_offset": 2048, 00:17:17.497 "data_size": 63488 00:17:17.497 }, 00:17:17.497 { 00:17:17.497 "name": null, 00:17:17.497 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:17.497 "is_configured": false, 00:17:17.497 "data_offset": 2048, 00:17:17.497 "data_size": 63488 00:17:17.497 } 00:17:17.497 ] 
00:17:17.497 }' 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:17.497 03:20:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.756 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:17.756 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:17.756 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.756 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:17.756 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.017 [2024-10-09 03:20:01.070350] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:17:18.017 [2024-10-09 03:20:01.070390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:18.017 [2024-10-09 03:20:01.070409] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:17:18.017 [2024-10-09 03:20:01.070417] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:18.017 [2024-10-09 03:20:01.070746] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:18.017 [2024-10-09 03:20:01.070768] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:17:18.017 [2024-10-09 03:20:01.070821] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:17:18.017 [2024-10-09 03:20:01.070851] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:17:18.017 [2024-10-09 03:20:01.070965] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:18.017 [2024-10-09 03:20:01.070979] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:18.017 [2024-10-09 03:20:01.071220] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:18.017 [2024-10-09 03:20:01.077509] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:18.017 [2024-10-09 03:20:01.077535] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:18.017 [2024-10-09 03:20:01.077759] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:18.017 pt4 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:18.017 03:20:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:18.017 "name": "raid_bdev1", 00:17:18.017 "uuid": "4286455f-9768-4bbf-896e-99705c274c05", 00:17:18.017 "strip_size_kb": 64, 00:17:18.017 "state": "online", 00:17:18.017 "raid_level": "raid5f", 00:17:18.017 "superblock": true, 00:17:18.017 "num_base_bdevs": 4, 00:17:18.017 "num_base_bdevs_discovered": 3, 00:17:18.017 "num_base_bdevs_operational": 3, 00:17:18.017 "base_bdevs_list": [ 00:17:18.017 { 00:17:18.017 "name": null, 00:17:18.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:18.017 "is_configured": false, 00:17:18.017 "data_offset": 2048, 00:17:18.017 "data_size": 63488 00:17:18.017 }, 00:17:18.017 { 00:17:18.017 "name": "pt2", 00:17:18.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:18.017 "is_configured": true, 00:17:18.017 "data_offset": 2048, 00:17:18.017 "data_size": 63488 00:17:18.017 }, 00:17:18.017 { 00:17:18.017 "name": "pt3", 00:17:18.017 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:18.017 "is_configured": true, 00:17:18.017 "data_offset": 2048, 00:17:18.017 "data_size": 63488 
00:17:18.017 }, 00:17:18.017 { 00:17:18.017 "name": "pt4", 00:17:18.017 "uuid": "00000000-0000-0000-0000-000000000004", 00:17:18.017 "is_configured": true, 00:17:18.017 "data_offset": 2048, 00:17:18.017 "data_size": 63488 00:17:18.017 } 00:17:18.017 ] 00:17:18.017 }' 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:18.017 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.276 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:18.276 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:18.276 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.276 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.276 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.537 [2024-10-09 03:20:01.609908] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 4286455f-9768-4bbf-896e-99705c274c05 '!=' 4286455f-9768-4bbf-896e-99705c274c05 ']' 00:17:18.537 03:20:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84392 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@950 -- # '[' -z 84392 ']' 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # kill -0 84392 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # uname 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84392 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:18.537 killing process with pid 84392 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84392' 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@969 -- # kill 84392 00:17:18.537 [2024-10-09 03:20:01.681736] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:18.537 [2024-10-09 03:20:01.681807] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.537 [2024-10-09 03:20:01.681885] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.537 [2024-10-09 03:20:01.681898] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:18.537 03:20:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@974 -- # wait 84392 00:17:18.797 [2024-10-09 03:20:02.092108] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:20.178 03:20:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:20.178 
00:17:20.178 real 0m8.490s 00:17:20.178 user 0m13.006s 00:17:20.178 sys 0m1.626s 00:17:20.178 03:20:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:20.178 03:20:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.178 ************************************ 00:17:20.178 END TEST raid5f_superblock_test 00:17:20.178 ************************************ 00:17:20.178 03:20:03 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:20.178 03:20:03 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:17:20.178 03:20:03 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:20.178 03:20:03 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:20.178 03:20:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:20.178 ************************************ 00:17:20.178 START TEST raid5f_rebuild_test 00:17:20.178 ************************************ 00:17:20.178 03:20:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 false false true 00:17:20.178 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:20.178 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:20.178 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:20.178 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:20.178 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:20.438 03:20:03 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:20.438 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:20.439 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:20.439 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84878 00:17:20.439 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:20.439 03:20:03 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84878 00:17:20.439 03:20:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@831 -- # '[' -z 84878 ']' 00:17:20.439 03:20:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.439 03:20:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:20.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.439 03:20:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.439 03:20:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:20.439 03:20:03 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.439 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:20.439 Zero copy mechanism will not be used. 00:17:20.439 [2024-10-09 03:20:03.575359] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:17:20.439 [2024-10-09 03:20:03.575473] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84878 ] 00:17:20.439 [2024-10-09 03:20:03.738978] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:20.698 [2024-10-09 03:20:03.977509] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:20.958 [2024-10-09 03:20:04.207692] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:20.958 [2024-10-09 03:20:04.207739] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # return 0 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.218 BaseBdev1_malloc 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.218 [2024-10-09 03:20:04.445435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:17:21.218 [2024-10-09 03:20:04.445512] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.218 [2024-10-09 03:20:04.445536] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:21.218 [2024-10-09 03:20:04.445552] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.218 [2024-10-09 03:20:04.447869] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.218 [2024-10-09 03:20:04.447905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:21.218 BaseBdev1 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.218 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.479 BaseBdev2_malloc 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.479 [2024-10-09 03:20:04.543849] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:21.479 [2024-10-09 03:20:04.543908] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.479 [2024-10-09 03:20:04.543929] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:21.479 [2024-10-09 03:20:04.543943] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.479 [2024-10-09 03:20:04.546171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.479 [2024-10-09 03:20:04.546208] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:21.479 BaseBdev2 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.479 BaseBdev3_malloc 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.479 [2024-10-09 03:20:04.606459] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:21.479 [2024-10-09 03:20:04.606509] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.479 [2024-10-09 03:20:04.606530] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:21.479 [2024-10-09 03:20:04.606542] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.479 
[2024-10-09 03:20:04.608740] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.479 [2024-10-09 03:20:04.608792] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:21.479 BaseBdev3 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.479 BaseBdev4_malloc 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.479 [2024-10-09 03:20:04.668642] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:21.479 [2024-10-09 03:20:04.668703] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.479 [2024-10-09 03:20:04.668723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:21.479 [2024-10-09 03:20:04.668735] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.479 [2024-10-09 03:20:04.670973] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.479 [2024-10-09 03:20:04.671010] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev4 00:17:21.479 BaseBdev4 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.479 spare_malloc 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.479 spare_delay 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.479 [2024-10-09 03:20:04.737198] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:21.479 [2024-10-09 03:20:04.737253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.479 [2024-10-09 03:20:04.737271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:21.479 [2024-10-09 03:20:04.737282] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.479 [2024-10-09 03:20:04.739475] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.479 [2024-10-09 03:20:04.739513] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:21.479 spare 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.479 [2024-10-09 03:20:04.749245] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:21.479 [2024-10-09 03:20:04.751191] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:21.479 [2024-10-09 03:20:04.751253] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:21.479 [2024-10-09 03:20:04.751299] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:21.479 [2024-10-09 03:20:04.751383] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:21.479 [2024-10-09 03:20:04.751393] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:17:21.479 [2024-10-09 03:20:04.751625] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:21.479 [2024-10-09 03:20:04.758063] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:21.479 [2024-10-09 03:20:04.758120] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:21.479 [2024-10-09 03:20:04.758335] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.479 03:20:04 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.479 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.739 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.739 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.739 "name": "raid_bdev1", 00:17:21.739 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:21.739 "strip_size_kb": 64, 00:17:21.739 "state": "online", 00:17:21.739 
"raid_level": "raid5f", 00:17:21.739 "superblock": false, 00:17:21.739 "num_base_bdevs": 4, 00:17:21.739 "num_base_bdevs_discovered": 4, 00:17:21.739 "num_base_bdevs_operational": 4, 00:17:21.739 "base_bdevs_list": [ 00:17:21.739 { 00:17:21.739 "name": "BaseBdev1", 00:17:21.739 "uuid": "6b9216e8-047d-5eac-99ec-9ee053119fe3", 00:17:21.739 "is_configured": true, 00:17:21.739 "data_offset": 0, 00:17:21.739 "data_size": 65536 00:17:21.739 }, 00:17:21.739 { 00:17:21.739 "name": "BaseBdev2", 00:17:21.739 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:21.739 "is_configured": true, 00:17:21.739 "data_offset": 0, 00:17:21.739 "data_size": 65536 00:17:21.739 }, 00:17:21.739 { 00:17:21.739 "name": "BaseBdev3", 00:17:21.739 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:21.739 "is_configured": true, 00:17:21.739 "data_offset": 0, 00:17:21.739 "data_size": 65536 00:17:21.739 }, 00:17:21.739 { 00:17:21.739 "name": "BaseBdev4", 00:17:21.739 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:21.739 "is_configured": true, 00:17:21.739 "data_offset": 0, 00:17:21.739 "data_size": 65536 00:17:21.739 } 00:17:21.739 ] 00:17:21.739 }' 00:17:21.739 03:20:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.739 03:20:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:21.999 [2024-10-09 03:20:05.162384] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 
00:17:21.999 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:22.259 [2024-10-09 03:20:05.425983] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:22.259 /dev/nbd0 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:22.259 1+0 records in 00:17:22.259 1+0 records out 00:17:22.259 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440729 s, 9.3 MB/s 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:22.259 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:17:22.830 512+0 records in 00:17:22.830 512+0 records out 00:17:22.830 100663296 bytes (101 MB, 96 MiB) copied, 0.480917 s, 209 MB/s 00:17:22.830 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:22.830 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:22.830 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:22.830 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:22.830 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:22.830 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:22.830 03:20:05 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:23.090 
03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:23.090 [2024-10-09 03:20:06.195261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.090 [2024-10-09 03:20:06.208765] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.090 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.091 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.091 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.091 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.091 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.091 03:20:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.091 03:20:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.091 03:20:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.091 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.091 "name": "raid_bdev1", 00:17:23.091 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:23.091 "strip_size_kb": 64, 00:17:23.091 "state": "online", 00:17:23.091 "raid_level": "raid5f", 00:17:23.091 "superblock": false, 00:17:23.091 "num_base_bdevs": 4, 00:17:23.091 "num_base_bdevs_discovered": 3, 00:17:23.091 "num_base_bdevs_operational": 3, 00:17:23.091 "base_bdevs_list": [ 00:17:23.091 { 00:17:23.091 "name": null, 00:17:23.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.091 "is_configured": false, 00:17:23.091 "data_offset": 0, 00:17:23.091 "data_size": 65536 00:17:23.091 }, 00:17:23.091 { 00:17:23.091 "name": "BaseBdev2", 00:17:23.091 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:23.091 "is_configured": true, 00:17:23.091 "data_offset": 0, 00:17:23.091 "data_size": 65536 00:17:23.091 }, 00:17:23.091 { 00:17:23.091 "name": "BaseBdev3", 00:17:23.091 "uuid": 
"d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:23.091 "is_configured": true, 00:17:23.091 "data_offset": 0, 00:17:23.091 "data_size": 65536 00:17:23.091 }, 00:17:23.091 { 00:17:23.091 "name": "BaseBdev4", 00:17:23.091 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:23.091 "is_configured": true, 00:17:23.091 "data_offset": 0, 00:17:23.091 "data_size": 65536 00:17:23.091 } 00:17:23.091 ] 00:17:23.091 }' 00:17:23.091 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.091 03:20:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.350 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:23.350 03:20:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.350 03:20:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.350 [2024-10-09 03:20:06.647955] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:23.610 [2024-10-09 03:20:06.661782] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:23.610 03:20:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.610 03:20:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:23.610 [2024-10-09 03:20:06.670749] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:24.554 03:20:07 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:24.554 "name": "raid_bdev1", 00:17:24.554 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:24.554 "strip_size_kb": 64, 00:17:24.554 "state": "online", 00:17:24.554 "raid_level": "raid5f", 00:17:24.554 "superblock": false, 00:17:24.554 "num_base_bdevs": 4, 00:17:24.554 "num_base_bdevs_discovered": 4, 00:17:24.554 "num_base_bdevs_operational": 4, 00:17:24.554 "process": { 00:17:24.554 "type": "rebuild", 00:17:24.554 "target": "spare", 00:17:24.554 "progress": { 00:17:24.554 "blocks": 19200, 00:17:24.554 "percent": 9 00:17:24.554 } 00:17:24.554 }, 00:17:24.554 "base_bdevs_list": [ 00:17:24.554 { 00:17:24.554 "name": "spare", 00:17:24.554 "uuid": "89391823-c5cf-59c8-98bc-2e5037869cf8", 00:17:24.554 "is_configured": true, 00:17:24.554 "data_offset": 0, 00:17:24.554 "data_size": 65536 00:17:24.554 }, 00:17:24.554 { 00:17:24.554 "name": "BaseBdev2", 00:17:24.554 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:24.554 "is_configured": true, 00:17:24.554 "data_offset": 0, 00:17:24.554 "data_size": 65536 00:17:24.554 }, 00:17:24.554 { 00:17:24.554 "name": "BaseBdev3", 00:17:24.554 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:24.554 "is_configured": true, 00:17:24.554 "data_offset": 0, 00:17:24.554 "data_size": 65536 00:17:24.554 }, 
00:17:24.554 { 00:17:24.554 "name": "BaseBdev4", 00:17:24.554 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:24.554 "is_configured": true, 00:17:24.554 "data_offset": 0, 00:17:24.554 "data_size": 65536 00:17:24.554 } 00:17:24.554 ] 00:17:24.554 }' 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.554 03:20:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.554 [2024-10-09 03:20:07.821571] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.814 [2024-10-09 03:20:07.877475] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:24.814 [2024-10-09 03:20:07.877534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.814 [2024-10-09 03:20:07.877550] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:24.814 [2024-10-09 03:20:07.877560] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.814 "name": "raid_bdev1", 00:17:24.814 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:24.814 "strip_size_kb": 64, 00:17:24.814 "state": "online", 00:17:24.814 "raid_level": "raid5f", 00:17:24.814 "superblock": false, 00:17:24.814 "num_base_bdevs": 4, 00:17:24.814 "num_base_bdevs_discovered": 3, 00:17:24.814 "num_base_bdevs_operational": 3, 00:17:24.814 "base_bdevs_list": [ 00:17:24.814 { 00:17:24.814 "name": null, 00:17:24.814 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:17:24.814 "is_configured": false, 00:17:24.814 "data_offset": 0, 00:17:24.814 "data_size": 65536 00:17:24.814 }, 00:17:24.814 { 00:17:24.814 "name": "BaseBdev2", 00:17:24.814 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:24.814 "is_configured": true, 00:17:24.814 "data_offset": 0, 00:17:24.814 "data_size": 65536 00:17:24.814 }, 00:17:24.814 { 00:17:24.814 "name": "BaseBdev3", 00:17:24.814 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:24.814 "is_configured": true, 00:17:24.814 "data_offset": 0, 00:17:24.814 "data_size": 65536 00:17:24.814 }, 00:17:24.814 { 00:17:24.814 "name": "BaseBdev4", 00:17:24.814 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:24.814 "is_configured": true, 00:17:24.814 "data_offset": 0, 00:17:24.814 "data_size": 65536 00:17:24.814 } 00:17:24.814 ] 00:17:24.814 }' 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.814 03:20:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.074 03:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:25.074 03:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:25.074 03:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:25.074 03:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:25.074 03:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:25.074 03:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.074 03:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.074 03:20:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.074 03:20:08 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.333 03:20:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.333 03:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:25.333 "name": "raid_bdev1", 00:17:25.333 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:25.333 "strip_size_kb": 64, 00:17:25.333 "state": "online", 00:17:25.333 "raid_level": "raid5f", 00:17:25.333 "superblock": false, 00:17:25.333 "num_base_bdevs": 4, 00:17:25.333 "num_base_bdevs_discovered": 3, 00:17:25.333 "num_base_bdevs_operational": 3, 00:17:25.333 "base_bdevs_list": [ 00:17:25.333 { 00:17:25.333 "name": null, 00:17:25.333 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.333 "is_configured": false, 00:17:25.333 "data_offset": 0, 00:17:25.333 "data_size": 65536 00:17:25.333 }, 00:17:25.333 { 00:17:25.333 "name": "BaseBdev2", 00:17:25.333 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:25.333 "is_configured": true, 00:17:25.333 "data_offset": 0, 00:17:25.333 "data_size": 65536 00:17:25.333 }, 00:17:25.333 { 00:17:25.333 "name": "BaseBdev3", 00:17:25.333 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:25.333 "is_configured": true, 00:17:25.333 "data_offset": 0, 00:17:25.333 "data_size": 65536 00:17:25.333 }, 00:17:25.333 { 00:17:25.333 "name": "BaseBdev4", 00:17:25.333 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:25.333 "is_configured": true, 00:17:25.333 "data_offset": 0, 00:17:25.333 "data_size": 65536 00:17:25.333 } 00:17:25.333 ] 00:17:25.333 }' 00:17:25.333 03:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:25.333 03:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:25.333 03:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:25.333 03:20:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:25.333 03:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:25.333 03:20:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.333 03:20:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.333 [2024-10-09 03:20:08.510683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:25.333 [2024-10-09 03:20:08.523818] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:17:25.333 03:20:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.333 03:20:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:25.333 [2024-10-09 03:20:08.532457] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:26.271 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.271 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.271 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.271 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.271 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.271 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.271 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.271 03:20:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.271 03:20:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.271 03:20:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.531 "name": "raid_bdev1", 00:17:26.531 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:26.531 "strip_size_kb": 64, 00:17:26.531 "state": "online", 00:17:26.531 "raid_level": "raid5f", 00:17:26.531 "superblock": false, 00:17:26.531 "num_base_bdevs": 4, 00:17:26.531 "num_base_bdevs_discovered": 4, 00:17:26.531 "num_base_bdevs_operational": 4, 00:17:26.531 "process": { 00:17:26.531 "type": "rebuild", 00:17:26.531 "target": "spare", 00:17:26.531 "progress": { 00:17:26.531 "blocks": 19200, 00:17:26.531 "percent": 9 00:17:26.531 } 00:17:26.531 }, 00:17:26.531 "base_bdevs_list": [ 00:17:26.531 { 00:17:26.531 "name": "spare", 00:17:26.531 "uuid": "89391823-c5cf-59c8-98bc-2e5037869cf8", 00:17:26.531 "is_configured": true, 00:17:26.531 "data_offset": 0, 00:17:26.531 "data_size": 65536 00:17:26.531 }, 00:17:26.531 { 00:17:26.531 "name": "BaseBdev2", 00:17:26.531 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:26.531 "is_configured": true, 00:17:26.531 "data_offset": 0, 00:17:26.531 "data_size": 65536 00:17:26.531 }, 00:17:26.531 { 00:17:26.531 "name": "BaseBdev3", 00:17:26.531 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:26.531 "is_configured": true, 00:17:26.531 "data_offset": 0, 00:17:26.531 "data_size": 65536 00:17:26.531 }, 00:17:26.531 { 00:17:26.531 "name": "BaseBdev4", 00:17:26.531 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:26.531 "is_configured": true, 00:17:26.531 "data_offset": 0, 00:17:26.531 "data_size": 65536 00:17:26.531 } 00:17:26.531 ] 00:17:26.531 }' 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=636 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.531 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:26.531 "name": "raid_bdev1", 00:17:26.531 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 
00:17:26.531 "strip_size_kb": 64, 00:17:26.531 "state": "online", 00:17:26.531 "raid_level": "raid5f", 00:17:26.531 "superblock": false, 00:17:26.531 "num_base_bdevs": 4, 00:17:26.531 "num_base_bdevs_discovered": 4, 00:17:26.531 "num_base_bdevs_operational": 4, 00:17:26.531 "process": { 00:17:26.531 "type": "rebuild", 00:17:26.531 "target": "spare", 00:17:26.531 "progress": { 00:17:26.531 "blocks": 21120, 00:17:26.531 "percent": 10 00:17:26.531 } 00:17:26.531 }, 00:17:26.531 "base_bdevs_list": [ 00:17:26.531 { 00:17:26.531 "name": "spare", 00:17:26.531 "uuid": "89391823-c5cf-59c8-98bc-2e5037869cf8", 00:17:26.531 "is_configured": true, 00:17:26.531 "data_offset": 0, 00:17:26.531 "data_size": 65536 00:17:26.532 }, 00:17:26.532 { 00:17:26.532 "name": "BaseBdev2", 00:17:26.532 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:26.532 "is_configured": true, 00:17:26.532 "data_offset": 0, 00:17:26.532 "data_size": 65536 00:17:26.532 }, 00:17:26.532 { 00:17:26.532 "name": "BaseBdev3", 00:17:26.532 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:26.532 "is_configured": true, 00:17:26.532 "data_offset": 0, 00:17:26.532 "data_size": 65536 00:17:26.532 }, 00:17:26.532 { 00:17:26.532 "name": "BaseBdev4", 00:17:26.532 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:26.532 "is_configured": true, 00:17:26.532 "data_offset": 0, 00:17:26.532 "data_size": 65536 00:17:26.532 } 00:17:26.532 ] 00:17:26.532 }' 00:17:26.532 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:26.532 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:26.532 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:26.532 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:26.532 03:20:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:27.914 03:20:10 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:27.914 "name": "raid_bdev1", 00:17:27.914 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:27.914 "strip_size_kb": 64, 00:17:27.914 "state": "online", 00:17:27.914 "raid_level": "raid5f", 00:17:27.914 "superblock": false, 00:17:27.914 "num_base_bdevs": 4, 00:17:27.914 "num_base_bdevs_discovered": 4, 00:17:27.914 "num_base_bdevs_operational": 4, 00:17:27.914 "process": { 00:17:27.914 "type": "rebuild", 00:17:27.914 "target": "spare", 00:17:27.914 "progress": { 00:17:27.914 "blocks": 42240, 00:17:27.914 "percent": 21 00:17:27.914 } 00:17:27.914 }, 00:17:27.914 "base_bdevs_list": [ 00:17:27.914 { 00:17:27.914 "name": "spare", 00:17:27.914 "uuid": "89391823-c5cf-59c8-98bc-2e5037869cf8", 
00:17:27.914 "is_configured": true, 00:17:27.914 "data_offset": 0, 00:17:27.914 "data_size": 65536 00:17:27.914 }, 00:17:27.914 { 00:17:27.914 "name": "BaseBdev2", 00:17:27.914 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:27.914 "is_configured": true, 00:17:27.914 "data_offset": 0, 00:17:27.914 "data_size": 65536 00:17:27.914 }, 00:17:27.914 { 00:17:27.914 "name": "BaseBdev3", 00:17:27.914 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:27.914 "is_configured": true, 00:17:27.914 "data_offset": 0, 00:17:27.914 "data_size": 65536 00:17:27.914 }, 00:17:27.914 { 00:17:27.914 "name": "BaseBdev4", 00:17:27.914 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:27.914 "is_configured": true, 00:17:27.914 "data_offset": 0, 00:17:27.914 "data_size": 65536 00:17:27.914 } 00:17:27.914 ] 00:17:27.914 }' 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:27.914 03:20:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:28.854 03:20:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:28.855 03:20:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:28.855 03:20:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:28.855 03:20:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:28.855 03:20:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:28.855 03:20:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:28.855 03:20:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.855 03:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.855 03:20:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.855 03:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.855 03:20:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.855 03:20:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:28.855 "name": "raid_bdev1", 00:17:28.855 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:28.855 "strip_size_kb": 64, 00:17:28.855 "state": "online", 00:17:28.855 "raid_level": "raid5f", 00:17:28.855 "superblock": false, 00:17:28.855 "num_base_bdevs": 4, 00:17:28.855 "num_base_bdevs_discovered": 4, 00:17:28.855 "num_base_bdevs_operational": 4, 00:17:28.855 "process": { 00:17:28.855 "type": "rebuild", 00:17:28.855 "target": "spare", 00:17:28.855 "progress": { 00:17:28.855 "blocks": 65280, 00:17:28.855 "percent": 33 00:17:28.855 } 00:17:28.855 }, 00:17:28.855 "base_bdevs_list": [ 00:17:28.855 { 00:17:28.855 "name": "spare", 00:17:28.855 "uuid": "89391823-c5cf-59c8-98bc-2e5037869cf8", 00:17:28.855 "is_configured": true, 00:17:28.855 "data_offset": 0, 00:17:28.855 "data_size": 65536 00:17:28.855 }, 00:17:28.855 { 00:17:28.855 "name": "BaseBdev2", 00:17:28.855 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:28.855 "is_configured": true, 00:17:28.855 "data_offset": 0, 00:17:28.855 "data_size": 65536 00:17:28.855 }, 00:17:28.855 { 00:17:28.855 "name": "BaseBdev3", 00:17:28.855 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:28.855 "is_configured": true, 00:17:28.855 "data_offset": 0, 00:17:28.855 "data_size": 65536 00:17:28.855 }, 00:17:28.855 { 00:17:28.855 "name": 
"BaseBdev4", 00:17:28.855 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:28.855 "is_configured": true, 00:17:28.855 "data_offset": 0, 00:17:28.855 "data_size": 65536 00:17:28.855 } 00:17:28.855 ] 00:17:28.855 }' 00:17:28.855 03:20:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:28.855 03:20:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:28.855 03:20:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:28.855 03:20:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:28.855 03:20:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:30.238 03:20:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:30.238 03:20:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:30.238 03:20:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:30.238 03:20:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:30.238 03:20:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:30.238 03:20:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:30.238 03:20:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.238 03:20:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.238 03:20:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:30.238 03:20:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.238 03:20:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:30.238 03:20:13 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:30.238 "name": "raid_bdev1", 00:17:30.238 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:30.238 "strip_size_kb": 64, 00:17:30.238 "state": "online", 00:17:30.238 "raid_level": "raid5f", 00:17:30.238 "superblock": false, 00:17:30.238 "num_base_bdevs": 4, 00:17:30.238 "num_base_bdevs_discovered": 4, 00:17:30.238 "num_base_bdevs_operational": 4, 00:17:30.238 "process": { 00:17:30.238 "type": "rebuild", 00:17:30.238 "target": "spare", 00:17:30.238 "progress": { 00:17:30.238 "blocks": 86400, 00:17:30.238 "percent": 43 00:17:30.238 } 00:17:30.238 }, 00:17:30.238 "base_bdevs_list": [ 00:17:30.238 { 00:17:30.238 "name": "spare", 00:17:30.238 "uuid": "89391823-c5cf-59c8-98bc-2e5037869cf8", 00:17:30.238 "is_configured": true, 00:17:30.238 "data_offset": 0, 00:17:30.238 "data_size": 65536 00:17:30.238 }, 00:17:30.238 { 00:17:30.238 "name": "BaseBdev2", 00:17:30.238 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:30.238 "is_configured": true, 00:17:30.238 "data_offset": 0, 00:17:30.238 "data_size": 65536 00:17:30.238 }, 00:17:30.238 { 00:17:30.238 "name": "BaseBdev3", 00:17:30.238 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:30.238 "is_configured": true, 00:17:30.238 "data_offset": 0, 00:17:30.238 "data_size": 65536 00:17:30.238 }, 00:17:30.238 { 00:17:30.238 "name": "BaseBdev4", 00:17:30.238 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:30.238 "is_configured": true, 00:17:30.238 "data_offset": 0, 00:17:30.238 "data_size": 65536 00:17:30.239 } 00:17:30.239 ] 00:17:30.239 }' 00:17:30.239 03:20:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:30.239 03:20:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:30.239 03:20:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:30.239 03:20:13 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:30.239 03:20:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:31.178 03:20:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.179 "name": "raid_bdev1", 00:17:31.179 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:31.179 "strip_size_kb": 64, 00:17:31.179 "state": "online", 00:17:31.179 "raid_level": "raid5f", 00:17:31.179 "superblock": false, 00:17:31.179 "num_base_bdevs": 4, 00:17:31.179 "num_base_bdevs_discovered": 4, 00:17:31.179 "num_base_bdevs_operational": 4, 00:17:31.179 "process": { 00:17:31.179 "type": "rebuild", 00:17:31.179 "target": "spare", 00:17:31.179 "progress": { 00:17:31.179 "blocks": 109440, 00:17:31.179 "percent": 55 00:17:31.179 } 
00:17:31.179 }, 00:17:31.179 "base_bdevs_list": [ 00:17:31.179 { 00:17:31.179 "name": "spare", 00:17:31.179 "uuid": "89391823-c5cf-59c8-98bc-2e5037869cf8", 00:17:31.179 "is_configured": true, 00:17:31.179 "data_offset": 0, 00:17:31.179 "data_size": 65536 00:17:31.179 }, 00:17:31.179 { 00:17:31.179 "name": "BaseBdev2", 00:17:31.179 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:31.179 "is_configured": true, 00:17:31.179 "data_offset": 0, 00:17:31.179 "data_size": 65536 00:17:31.179 }, 00:17:31.179 { 00:17:31.179 "name": "BaseBdev3", 00:17:31.179 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:31.179 "is_configured": true, 00:17:31.179 "data_offset": 0, 00:17:31.179 "data_size": 65536 00:17:31.179 }, 00:17:31.179 { 00:17:31.179 "name": "BaseBdev4", 00:17:31.179 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:31.179 "is_configured": true, 00:17:31.179 "data_offset": 0, 00:17:31.179 "data_size": 65536 00:17:31.179 } 00:17:31.179 ] 00:17:31.179 }' 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:31.179 03:20:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:32.560 
03:20:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.560 "name": "raid_bdev1", 00:17:32.560 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:32.560 "strip_size_kb": 64, 00:17:32.560 "state": "online", 00:17:32.560 "raid_level": "raid5f", 00:17:32.560 "superblock": false, 00:17:32.560 "num_base_bdevs": 4, 00:17:32.560 "num_base_bdevs_discovered": 4, 00:17:32.560 "num_base_bdevs_operational": 4, 00:17:32.560 "process": { 00:17:32.560 "type": "rebuild", 00:17:32.560 "target": "spare", 00:17:32.560 "progress": { 00:17:32.560 "blocks": 130560, 00:17:32.560 "percent": 66 00:17:32.560 } 00:17:32.560 }, 00:17:32.560 "base_bdevs_list": [ 00:17:32.560 { 00:17:32.560 "name": "spare", 00:17:32.560 "uuid": "89391823-c5cf-59c8-98bc-2e5037869cf8", 00:17:32.560 "is_configured": true, 00:17:32.560 "data_offset": 0, 00:17:32.560 "data_size": 65536 00:17:32.560 }, 00:17:32.560 { 00:17:32.560 "name": "BaseBdev2", 00:17:32.560 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:32.560 "is_configured": true, 00:17:32.560 "data_offset": 0, 00:17:32.560 "data_size": 65536 00:17:32.560 }, 00:17:32.560 { 00:17:32.560 "name": "BaseBdev3", 00:17:32.560 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 
00:17:32.560 "is_configured": true, 00:17:32.560 "data_offset": 0, 00:17:32.560 "data_size": 65536 00:17:32.560 }, 00:17:32.560 { 00:17:32.560 "name": "BaseBdev4", 00:17:32.560 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:32.560 "is_configured": true, 00:17:32.560 "data_offset": 0, 00:17:32.560 "data_size": 65536 00:17:32.560 } 00:17:32.560 ] 00:17:32.560 }' 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:32.560 03:20:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:33.499 03:20:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:33.499 03:20:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:33.499 03:20:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:33.499 03:20:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:33.499 03:20:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:33.499 03:20:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:33.499 03:20:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:33.499 03:20:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:33.499 03:20:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:33.499 03:20:16 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.499 03:20:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:33.499 03:20:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:33.499 "name": "raid_bdev1", 00:17:33.499 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:33.499 "strip_size_kb": 64, 00:17:33.499 "state": "online", 00:17:33.499 "raid_level": "raid5f", 00:17:33.499 "superblock": false, 00:17:33.499 "num_base_bdevs": 4, 00:17:33.499 "num_base_bdevs_discovered": 4, 00:17:33.499 "num_base_bdevs_operational": 4, 00:17:33.499 "process": { 00:17:33.499 "type": "rebuild", 00:17:33.499 "target": "spare", 00:17:33.499 "progress": { 00:17:33.499 "blocks": 151680, 00:17:33.499 "percent": 77 00:17:33.499 } 00:17:33.499 }, 00:17:33.499 "base_bdevs_list": [ 00:17:33.499 { 00:17:33.499 "name": "spare", 00:17:33.499 "uuid": "89391823-c5cf-59c8-98bc-2e5037869cf8", 00:17:33.499 "is_configured": true, 00:17:33.499 "data_offset": 0, 00:17:33.499 "data_size": 65536 00:17:33.499 }, 00:17:33.499 { 00:17:33.499 "name": "BaseBdev2", 00:17:33.499 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:33.499 "is_configured": true, 00:17:33.499 "data_offset": 0, 00:17:33.499 "data_size": 65536 00:17:33.499 }, 00:17:33.499 { 00:17:33.499 "name": "BaseBdev3", 00:17:33.499 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:33.499 "is_configured": true, 00:17:33.500 "data_offset": 0, 00:17:33.500 "data_size": 65536 00:17:33.500 }, 00:17:33.500 { 00:17:33.500 "name": "BaseBdev4", 00:17:33.500 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:33.500 "is_configured": true, 00:17:33.500 "data_offset": 0, 00:17:33.500 "data_size": 65536 00:17:33.500 } 00:17:33.500 ] 00:17:33.500 }' 00:17:33.500 03:20:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:33.500 03:20:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:17:33.500 03:20:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:33.500 03:20:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:33.500 03:20:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:34.440 03:20:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:34.440 03:20:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.440 03:20:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.440 03:20:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.440 03:20:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.440 03:20:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.440 03:20:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.440 03:20:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.440 03:20:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.440 03:20:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.440 03:20:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.440 03:20:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.440 "name": "raid_bdev1", 00:17:34.440 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:34.440 "strip_size_kb": 64, 00:17:34.440 "state": "online", 00:17:34.440 "raid_level": "raid5f", 00:17:34.440 "superblock": false, 00:17:34.440 "num_base_bdevs": 4, 00:17:34.440 "num_base_bdevs_discovered": 4, 00:17:34.440 "num_base_bdevs_operational": 4, 00:17:34.440 
"process": { 00:17:34.440 "type": "rebuild", 00:17:34.440 "target": "spare", 00:17:34.440 "progress": { 00:17:34.440 "blocks": 174720, 00:17:34.440 "percent": 88 00:17:34.440 } 00:17:34.440 }, 00:17:34.440 "base_bdevs_list": [ 00:17:34.440 { 00:17:34.440 "name": "spare", 00:17:34.440 "uuid": "89391823-c5cf-59c8-98bc-2e5037869cf8", 00:17:34.440 "is_configured": true, 00:17:34.440 "data_offset": 0, 00:17:34.440 "data_size": 65536 00:17:34.440 }, 00:17:34.440 { 00:17:34.440 "name": "BaseBdev2", 00:17:34.440 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:34.440 "is_configured": true, 00:17:34.440 "data_offset": 0, 00:17:34.440 "data_size": 65536 00:17:34.440 }, 00:17:34.440 { 00:17:34.440 "name": "BaseBdev3", 00:17:34.440 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:34.440 "is_configured": true, 00:17:34.440 "data_offset": 0, 00:17:34.440 "data_size": 65536 00:17:34.440 }, 00:17:34.440 { 00:17:34.440 "name": "BaseBdev4", 00:17:34.440 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:34.440 "is_configured": true, 00:17:34.440 "data_offset": 0, 00:17:34.440 "data_size": 65536 00:17:34.440 } 00:17:34.440 ] 00:17:34.440 }' 00:17:34.440 03:20:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.700 03:20:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.700 03:20:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.700 03:20:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.700 03:20:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.641 [2024-10-09 03:20:18.882790] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:35.641 [2024-10-09 03:20:18.882919] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:35.641 [2024-10-09 03:20:18.882987] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.641 "name": "raid_bdev1", 00:17:35.641 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:35.641 "strip_size_kb": 64, 00:17:35.641 "state": "online", 00:17:35.641 "raid_level": "raid5f", 00:17:35.641 "superblock": false, 00:17:35.641 "num_base_bdevs": 4, 00:17:35.641 "num_base_bdevs_discovered": 4, 00:17:35.641 "num_base_bdevs_operational": 4, 00:17:35.641 "process": { 00:17:35.641 "type": "rebuild", 00:17:35.641 "target": "spare", 00:17:35.641 "progress": { 00:17:35.641 "blocks": 195840, 00:17:35.641 "percent": 99 00:17:35.641 } 00:17:35.641 }, 00:17:35.641 "base_bdevs_list": [ 
00:17:35.641 { 00:17:35.641 "name": "spare", 00:17:35.641 "uuid": "89391823-c5cf-59c8-98bc-2e5037869cf8", 00:17:35.641 "is_configured": true, 00:17:35.641 "data_offset": 0, 00:17:35.641 "data_size": 65536 00:17:35.641 }, 00:17:35.641 { 00:17:35.641 "name": "BaseBdev2", 00:17:35.641 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:35.641 "is_configured": true, 00:17:35.641 "data_offset": 0, 00:17:35.641 "data_size": 65536 00:17:35.641 }, 00:17:35.641 { 00:17:35.641 "name": "BaseBdev3", 00:17:35.641 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:35.641 "is_configured": true, 00:17:35.641 "data_offset": 0, 00:17:35.641 "data_size": 65536 00:17:35.641 }, 00:17:35.641 { 00:17:35.641 "name": "BaseBdev4", 00:17:35.641 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:35.641 "is_configured": true, 00:17:35.641 "data_offset": 0, 00:17:35.641 "data_size": 65536 00:17:35.641 } 00:17:35.641 ] 00:17:35.641 }' 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.641 03:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.901 03:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.901 03:20:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:36.841 03:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.841 03:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.841 03:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.841 03:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.841 03:20:19 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.841 03:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.841 03:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.841 03:20:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.841 03:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.841 03:20:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.841 "name": "raid_bdev1", 00:17:36.841 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:36.841 "strip_size_kb": 64, 00:17:36.841 "state": "online", 00:17:36.841 "raid_level": "raid5f", 00:17:36.841 "superblock": false, 00:17:36.841 "num_base_bdevs": 4, 00:17:36.841 "num_base_bdevs_discovered": 4, 00:17:36.841 "num_base_bdevs_operational": 4, 00:17:36.841 "base_bdevs_list": [ 00:17:36.841 { 00:17:36.841 "name": "spare", 00:17:36.841 "uuid": "89391823-c5cf-59c8-98bc-2e5037869cf8", 00:17:36.841 "is_configured": true, 00:17:36.841 "data_offset": 0, 00:17:36.841 "data_size": 65536 00:17:36.841 }, 00:17:36.841 { 00:17:36.841 "name": "BaseBdev2", 00:17:36.841 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:36.841 "is_configured": true, 00:17:36.841 "data_offset": 0, 00:17:36.841 "data_size": 65536 00:17:36.841 }, 00:17:36.841 { 00:17:36.841 "name": "BaseBdev3", 00:17:36.841 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:36.841 "is_configured": true, 00:17:36.841 "data_offset": 0, 00:17:36.841 "data_size": 65536 00:17:36.841 }, 00:17:36.841 { 00:17:36.841 "name": "BaseBdev4", 00:17:36.841 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:36.841 "is_configured": 
true, 00:17:36.841 "data_offset": 0, 00:17:36.841 "data_size": 65536 00:17:36.841 } 00:17:36.841 ] 00:17:36.841 }' 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:36.841 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.101 "name": "raid_bdev1", 00:17:37.101 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:37.101 "strip_size_kb": 64, 00:17:37.101 "state": 
"online", 00:17:37.101 "raid_level": "raid5f", 00:17:37.101 "superblock": false, 00:17:37.101 "num_base_bdevs": 4, 00:17:37.101 "num_base_bdevs_discovered": 4, 00:17:37.101 "num_base_bdevs_operational": 4, 00:17:37.101 "base_bdevs_list": [ 00:17:37.101 { 00:17:37.101 "name": "spare", 00:17:37.101 "uuid": "89391823-c5cf-59c8-98bc-2e5037869cf8", 00:17:37.101 "is_configured": true, 00:17:37.101 "data_offset": 0, 00:17:37.101 "data_size": 65536 00:17:37.101 }, 00:17:37.101 { 00:17:37.101 "name": "BaseBdev2", 00:17:37.101 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:37.101 "is_configured": true, 00:17:37.101 "data_offset": 0, 00:17:37.101 "data_size": 65536 00:17:37.101 }, 00:17:37.101 { 00:17:37.101 "name": "BaseBdev3", 00:17:37.101 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:37.101 "is_configured": true, 00:17:37.101 "data_offset": 0, 00:17:37.101 "data_size": 65536 00:17:37.101 }, 00:17:37.101 { 00:17:37.101 "name": "BaseBdev4", 00:17:37.101 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:37.101 "is_configured": true, 00:17:37.101 "data_offset": 0, 00:17:37.101 "data_size": 65536 00:17:37.101 } 00:17:37.101 ] 00:17:37.101 }' 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:37.101 03:20:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:37.101 "name": "raid_bdev1", 00:17:37.101 "uuid": "3d3b987b-80ea-43a7-a242-beee6ababf93", 00:17:37.101 "strip_size_kb": 64, 00:17:37.101 "state": "online", 00:17:37.101 "raid_level": "raid5f", 00:17:37.101 "superblock": false, 00:17:37.101 "num_base_bdevs": 4, 00:17:37.101 "num_base_bdevs_discovered": 4, 00:17:37.101 "num_base_bdevs_operational": 4, 00:17:37.101 "base_bdevs_list": [ 00:17:37.101 { 00:17:37.101 "name": "spare", 00:17:37.101 "uuid": "89391823-c5cf-59c8-98bc-2e5037869cf8", 00:17:37.101 "is_configured": true, 00:17:37.101 "data_offset": 0, 00:17:37.101 "data_size": 65536 00:17:37.101 }, 00:17:37.101 { 00:17:37.101 
"name": "BaseBdev2", 00:17:37.101 "uuid": "0267683a-ba8e-5a42-b9fb-def963e1262f", 00:17:37.101 "is_configured": true, 00:17:37.101 "data_offset": 0, 00:17:37.101 "data_size": 65536 00:17:37.101 }, 00:17:37.101 { 00:17:37.101 "name": "BaseBdev3", 00:17:37.101 "uuid": "d912c0bd-a41e-598a-bef9-5d07ee50c133", 00:17:37.101 "is_configured": true, 00:17:37.101 "data_offset": 0, 00:17:37.101 "data_size": 65536 00:17:37.101 }, 00:17:37.101 { 00:17:37.101 "name": "BaseBdev4", 00:17:37.101 "uuid": "d8061bdb-73c4-5585-96ab-26a4390c6c6c", 00:17:37.101 "is_configured": true, 00:17:37.101 "data_offset": 0, 00:17:37.101 "data_size": 65536 00:17:37.101 } 00:17:37.101 ] 00:17:37.101 }' 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:37.101 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.672 [2024-10-09 03:20:20.674059] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:37.672 [2024-10-09 03:20:20.674091] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:37.672 [2024-10-09 03:20:20.674173] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:37.672 [2024-10-09 03:20:20.674258] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:37.672 [2024-10-09 03:20:20.674268] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.672 03:20:20 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:37.672 /dev/nbd0 00:17:37.672 03:20:20 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:37.672 1+0 records in 00:17:37.672 1+0 records out 00:17:37.672 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000538889 s, 7.6 MB/s 00:17:37.672 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.933 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:37.933 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.933 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:37.933 03:20:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:37.933 03:20:20 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:37.933 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:37.933 03:20:20 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:37.933 /dev/nbd1 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@869 -- # local i 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # break 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:37.933 1+0 records in 00:17:37.933 1+0 records out 00:17:37.933 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485769 s, 8.4 MB/s 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@886 -- # size=4096 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # return 0 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:37.933 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:38.193 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:38.193 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:38.193 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:38.193 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:38.193 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:38.193 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:38.193 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:38.453 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:38.453 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:38.453 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:38.453 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:38.453 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:38.453 03:20:21 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:38.453 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:38.453 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:38.453 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:38.453 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84878 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@950 -- # '[' -z 84878 ']' 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # kill -0 84878 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # uname 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 84878 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:38.714 killing process with pid 84878 00:17:38.714 Received shutdown signal, test time was about 60.000000 seconds 00:17:38.714 00:17:38.714 Latency(us) 00:17:38.714 [2024-10-09T03:20:22.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:38.714 [2024-10-09T03:20:22.017Z] =================================================================================================================== 00:17:38.714 [2024-10-09T03:20:22.017Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@968 -- # echo 'killing process with pid 84878' 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@969 -- # kill 84878 00:17:38.714 [2024-10-09 03:20:21.880773] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:38.714 03:20:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@974 -- # wait 84878 00:17:39.284 [2024-10-09 03:20:22.383166] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:40.666 03:20:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:40.666 00:17:40.666 real 0m20.196s 00:17:40.666 user 0m23.859s 00:17:40.666 sys 0m2.402s 00:17:40.666 03:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:40.666 ************************************ 00:17:40.666 END TEST raid5f_rebuild_test 00:17:40.666 ************************************ 00:17:40.666 03:20:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.666 03:20:23 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb 
raid_rebuild_test raid5f 4 true false true 00:17:40.666 03:20:23 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:17:40.666 03:20:23 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:40.666 03:20:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:40.666 ************************************ 00:17:40.666 START TEST raid5f_rebuild_test_sb 00:17:40.666 ************************************ 00:17:40.666 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid5f 4 true false true 00:17:40.666 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:40.666 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:17:40.666 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:40.666 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:40.667 03:20:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85394 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85394 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@831 -- # '[' -z 85394 ']' 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:40.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:40.667 03:20:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:40.667 [2024-10-09 03:20:23.867979] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:17:40.667 [2024-10-09 03:20:23.868185] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:17:40.667 Zero copy mechanism will not be used. 
00:17:40.667 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85394 ] 00:17:40.927 [2024-10-09 03:20:24.037780] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.187 [2024-10-09 03:20:24.279020] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:17:41.447 [2024-10-09 03:20:24.512163] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.447 [2024-10-09 03:20:24.512285] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:41.447 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:41.447 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # return 0 00:17:41.447 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.447 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:41.447 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.447 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.447 BaseBdev1_malloc 00:17:41.447 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.447 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:41.447 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.447 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.447 [2024-10-09 03:20:24.745200] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:41.447 [2024-10-09 03:20:24.745270] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:17:41.447 [2024-10-09 03:20:24.745297] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:41.447 [2024-10-09 03:20:24.745312] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.447 [2024-10-09 03:20:24.747619] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.447 [2024-10-09 03:20:24.747658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:41.708 BaseBdev1 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.708 BaseBdev2_malloc 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.708 [2024-10-09 03:20:24.831285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:41.708 [2024-10-09 03:20:24.831383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.708 [2024-10-09 03:20:24.831408] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:41.708 
[2024-10-09 03:20:24.831419] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.708 [2024-10-09 03:20:24.833709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.708 [2024-10-09 03:20:24.833746] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:41.708 BaseBdev2 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.708 BaseBdev3_malloc 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.708 [2024-10-09 03:20:24.886963] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:41.708 [2024-10-09 03:20:24.887010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.708 [2024-10-09 03:20:24.887030] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:41.708 [2024-10-09 03:20:24.887041] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.708 [2024-10-09 03:20:24.889289] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.708 [2024-10-09 03:20:24.889328] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:41.708 BaseBdev3 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.708 BaseBdev4_malloc 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.708 [2024-10-09 03:20:24.942637] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:17:41.708 [2024-10-09 03:20:24.942735] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.708 [2024-10-09 03:20:24.942761] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:41.708 [2024-10-09 03:20:24.942772] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.708 [2024-10-09 03:20:24.945042] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.708 [2024-10-09 03:20:24.945081] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev4 00:17:41.708 BaseBdev4 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.708 spare_malloc 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.708 03:20:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.708 spare_delay 00:17:41.708 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.708 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:41.708 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.708 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.969 [2024-10-09 03:20:25.010165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:41.969 [2024-10-09 03:20:25.010218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.969 [2024-10-09 03:20:25.010236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:17:41.969 [2024-10-09 03:20:25.010247] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.969 [2024-10-09 03:20:25.012459] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.969 [2024-10-09 03:20:25.012494] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:41.969 spare 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.969 [2024-10-09 03:20:25.022220] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:41.969 [2024-10-09 03:20:25.024138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:41.969 [2024-10-09 03:20:25.024254] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:41.969 [2024-10-09 03:20:25.024309] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:41.969 [2024-10-09 03:20:25.024496] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:41.969 [2024-10-09 03:20:25.024509] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:41.969 [2024-10-09 03:20:25.024761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:41.969 [2024-10-09 03:20:25.031351] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:41.969 [2024-10-09 03:20:25.031413] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:41.969 [2024-10-09 03:20:25.031592] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: 
raid_bdev_destroy_cb 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:41.969 "name": "raid_bdev1", 00:17:41.969 "uuid": 
"d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:41.969 "strip_size_kb": 64, 00:17:41.969 "state": "online", 00:17:41.969 "raid_level": "raid5f", 00:17:41.969 "superblock": true, 00:17:41.969 "num_base_bdevs": 4, 00:17:41.969 "num_base_bdevs_discovered": 4, 00:17:41.969 "num_base_bdevs_operational": 4, 00:17:41.969 "base_bdevs_list": [ 00:17:41.969 { 00:17:41.969 "name": "BaseBdev1", 00:17:41.969 "uuid": "60115fe5-eba6-52bf-bde4-923341de0e4e", 00:17:41.969 "is_configured": true, 00:17:41.969 "data_offset": 2048, 00:17:41.969 "data_size": 63488 00:17:41.969 }, 00:17:41.969 { 00:17:41.969 "name": "BaseBdev2", 00:17:41.969 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:41.969 "is_configured": true, 00:17:41.969 "data_offset": 2048, 00:17:41.969 "data_size": 63488 00:17:41.969 }, 00:17:41.969 { 00:17:41.969 "name": "BaseBdev3", 00:17:41.969 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:41.969 "is_configured": true, 00:17:41.969 "data_offset": 2048, 00:17:41.969 "data_size": 63488 00:17:41.969 }, 00:17:41.969 { 00:17:41.969 "name": "BaseBdev4", 00:17:41.969 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:41.969 "is_configured": true, 00:17:41.969 "data_offset": 2048, 00:17:41.969 "data_size": 63488 00:17:41.969 } 00:17:41.969 ] 00:17:41.969 }' 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:41.969 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.229 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:42.229 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:42.229 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.229 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.229 [2024-10-09 03:20:25.459804] bdev_raid.c:1129:raid_bdev_dump_info_json: 
*DEBUG*: raid_bdev_dump_config_json 00:17:42.229 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.229 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:17:42.229 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:42.229 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:42.229 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.229 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:42.229 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 
00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:42.489 [2024-10-09 03:20:25.731191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:17:42.489 /dev/nbd0 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:42.489 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:42.748 1+0 records in 00:17:42.748 1+0 records out 00:17:42.748 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000410287 s, 10.0 MB/s 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:17:42.748 03:20:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 oflag=direct 00:17:43.316 496+0 records in 00:17:43.316 496+0 records out 00:17:43.316 97517568 bytes (98 MB, 93 MiB) copied, 0.629989 s, 155 MB/s 00:17:43.316 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:43.316 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:43.316 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:43.316 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:43.316 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:43.316 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:17:43.316 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:43.576 [2024-10-09 03:20:26.632043] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.576 [2024-10-09 03:20:26.649111] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:43.576 "name": "raid_bdev1", 00:17:43.576 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:43.576 "strip_size_kb": 64, 00:17:43.576 "state": "online", 00:17:43.576 "raid_level": "raid5f", 00:17:43.576 "superblock": true, 00:17:43.576 "num_base_bdevs": 4, 00:17:43.576 "num_base_bdevs_discovered": 3, 00:17:43.576 "num_base_bdevs_operational": 3, 00:17:43.576 "base_bdevs_list": [ 00:17:43.576 { 00:17:43.576 "name": null, 00:17:43.576 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:43.576 "is_configured": 
false, 00:17:43.576 "data_offset": 0, 00:17:43.576 "data_size": 63488 00:17:43.576 }, 00:17:43.576 { 00:17:43.576 "name": "BaseBdev2", 00:17:43.576 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:43.576 "is_configured": true, 00:17:43.576 "data_offset": 2048, 00:17:43.576 "data_size": 63488 00:17:43.576 }, 00:17:43.576 { 00:17:43.576 "name": "BaseBdev3", 00:17:43.576 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:43.576 "is_configured": true, 00:17:43.576 "data_offset": 2048, 00:17:43.576 "data_size": 63488 00:17:43.576 }, 00:17:43.576 { 00:17:43.576 "name": "BaseBdev4", 00:17:43.576 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:43.576 "is_configured": true, 00:17:43.576 "data_offset": 2048, 00:17:43.576 "data_size": 63488 00:17:43.576 } 00:17:43.576 ] 00:17:43.576 }' 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:43.576 03:20:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.836 03:20:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:43.836 03:20:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.836 03:20:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.836 [2024-10-09 03:20:27.032819] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:43.836 [2024-10-09 03:20:27.046400] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:17:43.836 03:20:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.836 03:20:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:43.836 [2024-10-09 03:20:27.055230] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:44.776 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:44.776 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:44.776 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:44.776 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:44.776 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:44.776 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.776 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.776 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:44.776 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.036 "name": "raid_bdev1", 00:17:45.036 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:45.036 "strip_size_kb": 64, 00:17:45.036 "state": "online", 00:17:45.036 "raid_level": "raid5f", 00:17:45.036 "superblock": true, 00:17:45.036 "num_base_bdevs": 4, 00:17:45.036 "num_base_bdevs_discovered": 4, 00:17:45.036 "num_base_bdevs_operational": 4, 00:17:45.036 "process": { 00:17:45.036 "type": "rebuild", 00:17:45.036 "target": "spare", 00:17:45.036 "progress": { 00:17:45.036 "blocks": 19200, 00:17:45.036 "percent": 10 00:17:45.036 } 00:17:45.036 }, 00:17:45.036 "base_bdevs_list": [ 00:17:45.036 { 00:17:45.036 "name": "spare", 00:17:45.036 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:45.036 "is_configured": true, 00:17:45.036 "data_offset": 2048, 00:17:45.036 "data_size": 63488 00:17:45.036 }, 
00:17:45.036 { 00:17:45.036 "name": "BaseBdev2", 00:17:45.036 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:45.036 "is_configured": true, 00:17:45.036 "data_offset": 2048, 00:17:45.036 "data_size": 63488 00:17:45.036 }, 00:17:45.036 { 00:17:45.036 "name": "BaseBdev3", 00:17:45.036 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:45.036 "is_configured": true, 00:17:45.036 "data_offset": 2048, 00:17:45.036 "data_size": 63488 00:17:45.036 }, 00:17:45.036 { 00:17:45.036 "name": "BaseBdev4", 00:17:45.036 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:45.036 "is_configured": true, 00:17:45.036 "data_offset": 2048, 00:17:45.036 "data_size": 63488 00:17:45.036 } 00:17:45.036 ] 00:17:45.036 }' 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.036 [2024-10-09 03:20:28.206165] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.036 [2024-10-09 03:20:28.261945] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:45.036 [2024-10-09 03:20:28.262017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.036 [2024-10-09 03:20:28.262034] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:45.036 
[2024-10-09 03:20:28.262044] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.036 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.296 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.296 "name": "raid_bdev1", 00:17:45.296 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:45.296 "strip_size_kb": 64, 00:17:45.296 "state": "online", 00:17:45.296 "raid_level": "raid5f", 00:17:45.296 "superblock": true, 00:17:45.296 "num_base_bdevs": 4, 00:17:45.296 "num_base_bdevs_discovered": 3, 00:17:45.296 "num_base_bdevs_operational": 3, 00:17:45.296 "base_bdevs_list": [ 00:17:45.296 { 00:17:45.296 "name": null, 00:17:45.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.296 "is_configured": false, 00:17:45.296 "data_offset": 0, 00:17:45.296 "data_size": 63488 00:17:45.296 }, 00:17:45.296 { 00:17:45.296 "name": "BaseBdev2", 00:17:45.296 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:45.296 "is_configured": true, 00:17:45.296 "data_offset": 2048, 00:17:45.296 "data_size": 63488 00:17:45.296 }, 00:17:45.296 { 00:17:45.296 "name": "BaseBdev3", 00:17:45.296 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:45.296 "is_configured": true, 00:17:45.296 "data_offset": 2048, 00:17:45.296 "data_size": 63488 00:17:45.296 }, 00:17:45.296 { 00:17:45.296 "name": "BaseBdev4", 00:17:45.296 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:45.296 "is_configured": true, 00:17:45.296 "data_offset": 2048, 00:17:45.296 "data_size": 63488 00:17:45.296 } 00:17:45.296 ] 00:17:45.296 }' 00:17:45.296 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.296 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.555 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:45.555 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:45.555 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:45.555 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@171 -- # local target=none 00:17:45.555 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:45.555 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.555 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.555 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.555 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.555 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.555 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:45.555 "name": "raid_bdev1", 00:17:45.555 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:45.555 "strip_size_kb": 64, 00:17:45.555 "state": "online", 00:17:45.555 "raid_level": "raid5f", 00:17:45.555 "superblock": true, 00:17:45.555 "num_base_bdevs": 4, 00:17:45.555 "num_base_bdevs_discovered": 3, 00:17:45.555 "num_base_bdevs_operational": 3, 00:17:45.555 "base_bdevs_list": [ 00:17:45.555 { 00:17:45.555 "name": null, 00:17:45.555 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:45.555 "is_configured": false, 00:17:45.555 "data_offset": 0, 00:17:45.555 "data_size": 63488 00:17:45.555 }, 00:17:45.555 { 00:17:45.555 "name": "BaseBdev2", 00:17:45.555 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:45.555 "is_configured": true, 00:17:45.555 "data_offset": 2048, 00:17:45.555 "data_size": 63488 00:17:45.555 }, 00:17:45.555 { 00:17:45.555 "name": "BaseBdev3", 00:17:45.556 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:45.556 "is_configured": true, 00:17:45.556 "data_offset": 2048, 00:17:45.556 "data_size": 63488 00:17:45.556 }, 00:17:45.556 { 00:17:45.556 "name": "BaseBdev4", 00:17:45.556 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 
00:17:45.556 "is_configured": true, 00:17:45.556 "data_offset": 2048, 00:17:45.556 "data_size": 63488 00:17:45.556 } 00:17:45.556 ] 00:17:45.556 }' 00:17:45.556 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:45.556 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:45.556 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:45.815 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:45.815 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:45.815 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.815 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.815 [2024-10-09 03:20:28.891551] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:45.815 [2024-10-09 03:20:28.904801] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:17:45.815 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.815 03:20:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:45.815 [2024-10-09 03:20:28.913761] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:46.754 03:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.754 03:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.754 03:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.754 03:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:46.754 03:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.754 03:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.754 03:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.754 03:20:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.754 03:20:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.754 03:20:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.754 03:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:46.754 "name": "raid_bdev1", 00:17:46.754 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:46.754 "strip_size_kb": 64, 00:17:46.754 "state": "online", 00:17:46.754 "raid_level": "raid5f", 00:17:46.754 "superblock": true, 00:17:46.754 "num_base_bdevs": 4, 00:17:46.754 "num_base_bdevs_discovered": 4, 00:17:46.754 "num_base_bdevs_operational": 4, 00:17:46.754 "process": { 00:17:46.754 "type": "rebuild", 00:17:46.754 "target": "spare", 00:17:46.754 "progress": { 00:17:46.754 "blocks": 19200, 00:17:46.754 "percent": 10 00:17:46.754 } 00:17:46.754 }, 00:17:46.754 "base_bdevs_list": [ 00:17:46.754 { 00:17:46.754 "name": "spare", 00:17:46.754 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:46.754 "is_configured": true, 00:17:46.754 "data_offset": 2048, 00:17:46.754 "data_size": 63488 00:17:46.754 }, 00:17:46.754 { 00:17:46.754 "name": "BaseBdev2", 00:17:46.754 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:46.754 "is_configured": true, 00:17:46.754 "data_offset": 2048, 00:17:46.754 "data_size": 63488 00:17:46.754 }, 00:17:46.754 { 00:17:46.754 "name": "BaseBdev3", 00:17:46.754 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:46.754 "is_configured": true, 00:17:46.754 "data_offset": 2048, 
00:17:46.754 "data_size": 63488 00:17:46.754 }, 00:17:46.754 { 00:17:46.754 "name": "BaseBdev4", 00:17:46.754 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:46.754 "is_configured": true, 00:17:46.754 "data_offset": 2048, 00:17:46.754 "data_size": 63488 00:17:46.754 } 00:17:46.754 ] 00:17:46.754 }' 00:17:46.754 03:20:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:46.754 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=657 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.754 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.014 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.014 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.014 "name": "raid_bdev1", 00:17:47.014 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:47.014 "strip_size_kb": 64, 00:17:47.014 "state": "online", 00:17:47.014 "raid_level": "raid5f", 00:17:47.014 "superblock": true, 00:17:47.014 "num_base_bdevs": 4, 00:17:47.014 "num_base_bdevs_discovered": 4, 00:17:47.014 "num_base_bdevs_operational": 4, 00:17:47.014 "process": { 00:17:47.014 "type": "rebuild", 00:17:47.014 "target": "spare", 00:17:47.014 "progress": { 00:17:47.014 "blocks": 21120, 00:17:47.014 "percent": 11 00:17:47.014 } 00:17:47.014 }, 00:17:47.014 "base_bdevs_list": [ 00:17:47.014 { 00:17:47.014 "name": "spare", 00:17:47.014 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:47.014 "is_configured": true, 00:17:47.014 "data_offset": 2048, 00:17:47.014 "data_size": 63488 00:17:47.014 }, 00:17:47.014 { 00:17:47.014 "name": "BaseBdev2", 00:17:47.014 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:47.014 "is_configured": true, 00:17:47.014 "data_offset": 2048, 00:17:47.014 "data_size": 63488 00:17:47.014 }, 00:17:47.014 { 00:17:47.014 "name": "BaseBdev3", 00:17:47.014 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:47.014 "is_configured": true, 00:17:47.014 "data_offset": 2048, 
00:17:47.014 "data_size": 63488 00:17:47.014 }, 00:17:47.014 { 00:17:47.014 "name": "BaseBdev4", 00:17:47.014 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:47.014 "is_configured": true, 00:17:47.014 "data_offset": 2048, 00:17:47.014 "data_size": 63488 00:17:47.014 } 00:17:47.014 ] 00:17:47.014 }' 00:17:47.014 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:47.014 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:47.014 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:47.014 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:47.014 03:20:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:47.953 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:47.953 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:47.953 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:47.953 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:47.953 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:47.953 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:47.953 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:47.953 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.953 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.953 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:17:47.953 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.953 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:47.953 "name": "raid_bdev1", 00:17:47.953 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:47.953 "strip_size_kb": 64, 00:17:47.953 "state": "online", 00:17:47.953 "raid_level": "raid5f", 00:17:47.953 "superblock": true, 00:17:47.953 "num_base_bdevs": 4, 00:17:47.953 "num_base_bdevs_discovered": 4, 00:17:47.953 "num_base_bdevs_operational": 4, 00:17:47.953 "process": { 00:17:47.953 "type": "rebuild", 00:17:47.953 "target": "spare", 00:17:47.953 "progress": { 00:17:47.953 "blocks": 42240, 00:17:47.953 "percent": 22 00:17:47.953 } 00:17:47.953 }, 00:17:47.953 "base_bdevs_list": [ 00:17:47.953 { 00:17:47.953 "name": "spare", 00:17:47.953 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:47.953 "is_configured": true, 00:17:47.953 "data_offset": 2048, 00:17:47.953 "data_size": 63488 00:17:47.953 }, 00:17:47.953 { 00:17:47.953 "name": "BaseBdev2", 00:17:47.953 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:47.953 "is_configured": true, 00:17:47.953 "data_offset": 2048, 00:17:47.953 "data_size": 63488 00:17:47.953 }, 00:17:47.953 { 00:17:47.953 "name": "BaseBdev3", 00:17:47.953 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:47.953 "is_configured": true, 00:17:47.953 "data_offset": 2048, 00:17:47.953 "data_size": 63488 00:17:47.953 }, 00:17:47.953 { 00:17:47.953 "name": "BaseBdev4", 00:17:47.953 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:47.953 "is_configured": true, 00:17:47.953 "data_offset": 2048, 00:17:47.953 "data_size": 63488 00:17:47.953 } 00:17:47.953 ] 00:17:47.953 }' 00:17:47.953 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.213 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.213 03:20:31 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.213 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.213 03:20:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:49.153 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:49.153 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:49.153 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.153 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:49.153 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:49.153 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.153 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.153 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.153 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:49.153 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.153 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:49.153 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.153 "name": "raid_bdev1", 00:17:49.153 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:49.153 "strip_size_kb": 64, 00:17:49.153 "state": "online", 00:17:49.153 "raid_level": "raid5f", 00:17:49.153 "superblock": true, 00:17:49.153 "num_base_bdevs": 4, 00:17:49.153 "num_base_bdevs_discovered": 4, 00:17:49.153 "num_base_bdevs_operational": 
4, 00:17:49.153 "process": { 00:17:49.153 "type": "rebuild", 00:17:49.153 "target": "spare", 00:17:49.153 "progress": { 00:17:49.153 "blocks": 65280, 00:17:49.153 "percent": 34 00:17:49.153 } 00:17:49.153 }, 00:17:49.153 "base_bdevs_list": [ 00:17:49.153 { 00:17:49.153 "name": "spare", 00:17:49.153 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:49.153 "is_configured": true, 00:17:49.153 "data_offset": 2048, 00:17:49.153 "data_size": 63488 00:17:49.153 }, 00:17:49.153 { 00:17:49.153 "name": "BaseBdev2", 00:17:49.153 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:49.153 "is_configured": true, 00:17:49.153 "data_offset": 2048, 00:17:49.153 "data_size": 63488 00:17:49.154 }, 00:17:49.154 { 00:17:49.154 "name": "BaseBdev3", 00:17:49.154 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:49.154 "is_configured": true, 00:17:49.154 "data_offset": 2048, 00:17:49.154 "data_size": 63488 00:17:49.154 }, 00:17:49.154 { 00:17:49.154 "name": "BaseBdev4", 00:17:49.154 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:49.154 "is_configured": true, 00:17:49.154 "data_offset": 2048, 00:17:49.154 "data_size": 63488 00:17:49.154 } 00:17:49.154 ] 00:17:49.154 }' 00:17:49.154 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.154 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:49.154 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.414 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:49.414 03:20:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.354 
03:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.354 "name": "raid_bdev1", 00:17:50.354 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:50.354 "strip_size_kb": 64, 00:17:50.354 "state": "online", 00:17:50.354 "raid_level": "raid5f", 00:17:50.354 "superblock": true, 00:17:50.354 "num_base_bdevs": 4, 00:17:50.354 "num_base_bdevs_discovered": 4, 00:17:50.354 "num_base_bdevs_operational": 4, 00:17:50.354 "process": { 00:17:50.354 "type": "rebuild", 00:17:50.354 "target": "spare", 00:17:50.354 "progress": { 00:17:50.354 "blocks": 86400, 00:17:50.354 "percent": 45 00:17:50.354 } 00:17:50.354 }, 00:17:50.354 "base_bdevs_list": [ 00:17:50.354 { 00:17:50.354 "name": "spare", 00:17:50.354 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:50.354 "is_configured": true, 00:17:50.354 "data_offset": 2048, 00:17:50.354 "data_size": 63488 00:17:50.354 }, 00:17:50.354 { 00:17:50.354 "name": "BaseBdev2", 00:17:50.354 "uuid": 
"f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:50.354 "is_configured": true, 00:17:50.354 "data_offset": 2048, 00:17:50.354 "data_size": 63488 00:17:50.354 }, 00:17:50.354 { 00:17:50.354 "name": "BaseBdev3", 00:17:50.354 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:50.354 "is_configured": true, 00:17:50.354 "data_offset": 2048, 00:17:50.354 "data_size": 63488 00:17:50.354 }, 00:17:50.354 { 00:17:50.354 "name": "BaseBdev4", 00:17:50.354 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:50.354 "is_configured": true, 00:17:50.354 "data_offset": 2048, 00:17:50.354 "data_size": 63488 00:17:50.354 } 00:17:50.354 ] 00:17:50.354 }' 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.354 03:20:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.738 "name": "raid_bdev1", 00:17:51.738 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:51.738 "strip_size_kb": 64, 00:17:51.738 "state": "online", 00:17:51.738 "raid_level": "raid5f", 00:17:51.738 "superblock": true, 00:17:51.738 "num_base_bdevs": 4, 00:17:51.738 "num_base_bdevs_discovered": 4, 00:17:51.738 "num_base_bdevs_operational": 4, 00:17:51.738 "process": { 00:17:51.738 "type": "rebuild", 00:17:51.738 "target": "spare", 00:17:51.738 "progress": { 00:17:51.738 "blocks": 107520, 00:17:51.738 "percent": 56 00:17:51.738 } 00:17:51.738 }, 00:17:51.738 "base_bdevs_list": [ 00:17:51.738 { 00:17:51.738 "name": "spare", 00:17:51.738 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:51.738 "is_configured": true, 00:17:51.738 "data_offset": 2048, 00:17:51.738 "data_size": 63488 00:17:51.738 }, 00:17:51.738 { 00:17:51.738 "name": "BaseBdev2", 00:17:51.738 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:51.738 "is_configured": true, 00:17:51.738 "data_offset": 2048, 00:17:51.738 "data_size": 63488 00:17:51.738 }, 00:17:51.738 { 00:17:51.738 "name": "BaseBdev3", 00:17:51.738 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:51.738 "is_configured": true, 00:17:51.738 "data_offset": 2048, 00:17:51.738 "data_size": 63488 00:17:51.738 }, 00:17:51.738 { 00:17:51.738 "name": "BaseBdev4", 00:17:51.738 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:51.738 "is_configured": true, 00:17:51.738 "data_offset": 
2048, 00:17:51.738 "data_size": 63488 00:17:51.738 } 00:17:51.738 ] 00:17:51.738 }' 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.738 03:20:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.678 
"name": "raid_bdev1", 00:17:52.678 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:52.678 "strip_size_kb": 64, 00:17:52.678 "state": "online", 00:17:52.678 "raid_level": "raid5f", 00:17:52.678 "superblock": true, 00:17:52.678 "num_base_bdevs": 4, 00:17:52.678 "num_base_bdevs_discovered": 4, 00:17:52.678 "num_base_bdevs_operational": 4, 00:17:52.678 "process": { 00:17:52.678 "type": "rebuild", 00:17:52.678 "target": "spare", 00:17:52.678 "progress": { 00:17:52.678 "blocks": 130560, 00:17:52.678 "percent": 68 00:17:52.678 } 00:17:52.678 }, 00:17:52.678 "base_bdevs_list": [ 00:17:52.678 { 00:17:52.678 "name": "spare", 00:17:52.678 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:52.678 "is_configured": true, 00:17:52.678 "data_offset": 2048, 00:17:52.678 "data_size": 63488 00:17:52.678 }, 00:17:52.678 { 00:17:52.678 "name": "BaseBdev2", 00:17:52.678 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:52.678 "is_configured": true, 00:17:52.678 "data_offset": 2048, 00:17:52.678 "data_size": 63488 00:17:52.678 }, 00:17:52.678 { 00:17:52.678 "name": "BaseBdev3", 00:17:52.678 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:52.678 "is_configured": true, 00:17:52.678 "data_offset": 2048, 00:17:52.678 "data_size": 63488 00:17:52.678 }, 00:17:52.678 { 00:17:52.678 "name": "BaseBdev4", 00:17:52.678 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:52.678 "is_configured": true, 00:17:52.678 "data_offset": 2048, 00:17:52.678 "data_size": 63488 00:17:52.678 } 00:17:52.678 ] 00:17:52.678 }' 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.678 03:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.679 
03:20:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:53.617 03:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:53.617 03:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.617 03:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.617 03:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.617 03:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.617 03:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.617 03:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.617 03:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.617 03:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.617 03:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.877 03:20:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.877 03:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.877 "name": "raid_bdev1", 00:17:53.877 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:53.877 "strip_size_kb": 64, 00:17:53.877 "state": "online", 00:17:53.877 "raid_level": "raid5f", 00:17:53.877 "superblock": true, 00:17:53.877 "num_base_bdevs": 4, 00:17:53.877 "num_base_bdevs_discovered": 4, 00:17:53.877 "num_base_bdevs_operational": 4, 00:17:53.877 "process": { 00:17:53.877 "type": "rebuild", 00:17:53.877 "target": "spare", 00:17:53.877 "progress": { 00:17:53.877 "blocks": 151680, 00:17:53.877 "percent": 79 00:17:53.877 } 00:17:53.877 }, 
00:17:53.877 "base_bdevs_list": [ 00:17:53.877 { 00:17:53.877 "name": "spare", 00:17:53.877 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:53.877 "is_configured": true, 00:17:53.877 "data_offset": 2048, 00:17:53.877 "data_size": 63488 00:17:53.877 }, 00:17:53.877 { 00:17:53.877 "name": "BaseBdev2", 00:17:53.877 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:53.877 "is_configured": true, 00:17:53.877 "data_offset": 2048, 00:17:53.877 "data_size": 63488 00:17:53.877 }, 00:17:53.877 { 00:17:53.877 "name": "BaseBdev3", 00:17:53.877 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:53.877 "is_configured": true, 00:17:53.877 "data_offset": 2048, 00:17:53.877 "data_size": 63488 00:17:53.877 }, 00:17:53.877 { 00:17:53.877 "name": "BaseBdev4", 00:17:53.877 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:53.877 "is_configured": true, 00:17:53.877 "data_offset": 2048, 00:17:53.877 "data_size": 63488 00:17:53.877 } 00:17:53.877 ] 00:17:53.877 }' 00:17:53.877 03:20:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:53.877 03:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:53.877 03:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:53.877 03:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:53.877 03:20:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:54.854 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:54.854 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:54.854 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:54.854 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:17:54.854 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:54.854 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:54.854 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:54.854 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:54.854 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.854 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:54.854 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.854 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:54.854 "name": "raid_bdev1", 00:17:54.854 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:54.854 "strip_size_kb": 64, 00:17:54.854 "state": "online", 00:17:54.854 "raid_level": "raid5f", 00:17:54.854 "superblock": true, 00:17:54.854 "num_base_bdevs": 4, 00:17:54.854 "num_base_bdevs_discovered": 4, 00:17:54.854 "num_base_bdevs_operational": 4, 00:17:54.854 "process": { 00:17:54.854 "type": "rebuild", 00:17:54.854 "target": "spare", 00:17:54.854 "progress": { 00:17:54.854 "blocks": 174720, 00:17:54.854 "percent": 91 00:17:54.854 } 00:17:54.854 }, 00:17:54.854 "base_bdevs_list": [ 00:17:54.854 { 00:17:54.854 "name": "spare", 00:17:54.854 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:54.854 "is_configured": true, 00:17:54.854 "data_offset": 2048, 00:17:54.854 "data_size": 63488 00:17:54.854 }, 00:17:54.854 { 00:17:54.854 "name": "BaseBdev2", 00:17:54.854 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:54.854 "is_configured": true, 00:17:54.854 "data_offset": 2048, 00:17:54.854 "data_size": 63488 00:17:54.854 }, 00:17:54.854 { 00:17:54.854 "name": "BaseBdev3", 
00:17:54.854 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:54.854 "is_configured": true, 00:17:54.854 "data_offset": 2048, 00:17:54.854 "data_size": 63488 00:17:54.854 }, 00:17:54.854 { 00:17:54.854 "name": "BaseBdev4", 00:17:54.854 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:54.854 "is_configured": true, 00:17:54.854 "data_offset": 2048, 00:17:54.854 "data_size": 63488 00:17:54.854 } 00:17:54.854 ] 00:17:54.854 }' 00:17:54.854 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.130 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.130 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.130 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.130 03:20:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:55.701 [2024-10-09 03:20:38.962375] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:55.701 [2024-10-09 03:20:38.962491] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:55.701 [2024-10-09 03:20:38.962634] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:55.960 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:55.960 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.960 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.960 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.960 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.960 03:20:39 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.960 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.960 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.960 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.960 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.960 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.220 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.220 "name": "raid_bdev1", 00:17:56.220 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:56.220 "strip_size_kb": 64, 00:17:56.220 "state": "online", 00:17:56.220 "raid_level": "raid5f", 00:17:56.220 "superblock": true, 00:17:56.220 "num_base_bdevs": 4, 00:17:56.220 "num_base_bdevs_discovered": 4, 00:17:56.220 "num_base_bdevs_operational": 4, 00:17:56.220 "base_bdevs_list": [ 00:17:56.220 { 00:17:56.220 "name": "spare", 00:17:56.220 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:56.220 "is_configured": true, 00:17:56.220 "data_offset": 2048, 00:17:56.220 "data_size": 63488 00:17:56.220 }, 00:17:56.220 { 00:17:56.220 "name": "BaseBdev2", 00:17:56.220 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:56.220 "is_configured": true, 00:17:56.220 "data_offset": 2048, 00:17:56.220 "data_size": 63488 00:17:56.221 }, 00:17:56.221 { 00:17:56.221 "name": "BaseBdev3", 00:17:56.221 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:56.221 "is_configured": true, 00:17:56.221 "data_offset": 2048, 00:17:56.221 "data_size": 63488 00:17:56.221 }, 00:17:56.221 { 00:17:56.221 "name": "BaseBdev4", 00:17:56.221 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:56.221 "is_configured": true, 00:17:56.221 "data_offset": 2048, 
00:17:56.221 "data_size": 63488 00:17:56.221 } 00:17:56.221 ] 00:17:56.221 }' 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.221 "name": "raid_bdev1", 00:17:56.221 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:56.221 "strip_size_kb": 64, 00:17:56.221 
"state": "online", 00:17:56.221 "raid_level": "raid5f", 00:17:56.221 "superblock": true, 00:17:56.221 "num_base_bdevs": 4, 00:17:56.221 "num_base_bdevs_discovered": 4, 00:17:56.221 "num_base_bdevs_operational": 4, 00:17:56.221 "base_bdevs_list": [ 00:17:56.221 { 00:17:56.221 "name": "spare", 00:17:56.221 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:56.221 "is_configured": true, 00:17:56.221 "data_offset": 2048, 00:17:56.221 "data_size": 63488 00:17:56.221 }, 00:17:56.221 { 00:17:56.221 "name": "BaseBdev2", 00:17:56.221 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:56.221 "is_configured": true, 00:17:56.221 "data_offset": 2048, 00:17:56.221 "data_size": 63488 00:17:56.221 }, 00:17:56.221 { 00:17:56.221 "name": "BaseBdev3", 00:17:56.221 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:56.221 "is_configured": true, 00:17:56.221 "data_offset": 2048, 00:17:56.221 "data_size": 63488 00:17:56.221 }, 00:17:56.221 { 00:17:56.221 "name": "BaseBdev4", 00:17:56.221 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:56.221 "is_configured": true, 00:17:56.221 "data_offset": 2048, 00:17:56.221 "data_size": 63488 00:17:56.221 } 00:17:56.221 ] 00:17:56.221 }' 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.221 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.481 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.481 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.481 "name": "raid_bdev1", 00:17:56.481 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:56.481 "strip_size_kb": 64, 00:17:56.481 "state": "online", 00:17:56.481 "raid_level": "raid5f", 00:17:56.481 "superblock": true, 00:17:56.481 "num_base_bdevs": 4, 00:17:56.481 "num_base_bdevs_discovered": 4, 00:17:56.481 "num_base_bdevs_operational": 4, 00:17:56.481 "base_bdevs_list": [ 00:17:56.481 { 00:17:56.481 "name": "spare", 00:17:56.481 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:56.481 "is_configured": true, 00:17:56.481 
"data_offset": 2048, 00:17:56.481 "data_size": 63488 00:17:56.481 }, 00:17:56.481 { 00:17:56.481 "name": "BaseBdev2", 00:17:56.481 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:56.481 "is_configured": true, 00:17:56.481 "data_offset": 2048, 00:17:56.481 "data_size": 63488 00:17:56.481 }, 00:17:56.481 { 00:17:56.481 "name": "BaseBdev3", 00:17:56.481 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:56.481 "is_configured": true, 00:17:56.481 "data_offset": 2048, 00:17:56.481 "data_size": 63488 00:17:56.481 }, 00:17:56.481 { 00:17:56.481 "name": "BaseBdev4", 00:17:56.481 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:56.481 "is_configured": true, 00:17:56.481 "data_offset": 2048, 00:17:56.481 "data_size": 63488 00:17:56.481 } 00:17:56.481 ] 00:17:56.481 }' 00:17:56.481 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.481 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.741 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:56.741 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.741 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.741 [2024-10-09 03:20:39.963825] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.741 [2024-10-09 03:20:39.963916] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.741 [2024-10-09 03:20:39.964011] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.741 [2024-10-09 03:20:39.964112] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.741 [2024-10-09 03:20:39.964166] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:56.741 
03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.741 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.741 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:56.741 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.741 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.741 03:20:39 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.741 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:56.741 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:56.741 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:56.741 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:56.741 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:56.741 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:56.741 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:56.741 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:56.741 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:56.741 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:56.742 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:56.742 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:56.742 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:57.002 /dev/nbd0 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:57.002 1+0 records in 00:17:57.002 1+0 records out 00:17:57.002 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000526028 s, 7.8 MB/s 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:57.002 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:57.262 /dev/nbd1 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@869 -- # local i 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # break 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:57.262 1+0 records in 00:17:57.262 1+0 records out 00:17:57.262 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000437639 s, 9.4 MB/s 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@886 -- # size=4096 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # return 0 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:57.262 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:57.522 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:57.522 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.523 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:57.523 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:57.523 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:57.523 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:57.523 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:57.783 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:57.783 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:57.783 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:57.783 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:57.783 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:57.783 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:57.783 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:57.783 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:57.783 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:57.783 03:20:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.043 
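The `waitfornbd` / `waitfornbd_exit` helpers traced above share one shape: poll `/proc/partitions` with `grep -q -w` up to 20 times, breaking on the first hit. A minimal POSIX sketch of that retry loop, with a temp file standing in for `/proc/partitions` so it runs anywhere (not a verbatim copy of `common/autotest_common.sh`):

```shell
# Retry-until-visible loop in the style of waitfornbd: grep for the device
# name as a whole word, up to 20 attempts with a short sleep between tries.
# "$parts" is a temporary stand-in for /proc/partitions.
parts=$(mktemp)
printf 'nbd0\n' > "$parts"
rc=1
i=1
while [ "$i" -le 20 ]; do
    if grep -q -w nbd0 "$parts"; then
        rc=0
        break
    fi
    sleep 0.1
    i=$((i + 1))
done
rm -f "$parts"
echo "rc=$rc"
```

The `-w` flag is what keeps `nbd0` from matching `nbd01`; the bounded loop turns "device not yet registered" from a hard failure into a short wait.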
03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.043 [2024-10-09 03:20:41.151714] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:58.043 [2024-10-09 03:20:41.151774] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.043 [2024-10-09 03:20:41.151797] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:17:58.043 [2024-10-09 03:20:41.151806] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.043 [2024-10-09 03:20:41.154198] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.043 [2024-10-09 03:20:41.154273] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:58.043 [2024-10-09 03:20:41.154395] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:58.043 [2024-10-09 03:20:41.154462] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.043 [2024-10-09 03:20:41.154608] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.043 [2024-10-09 03:20:41.154745] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:58.043 [2024-10-09 03:20:41.154873] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:17:58.043 spare 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.043 [2024-10-09 03:20:41.254799] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:58.043 [2024-10-09 03:20:41.254881] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:17:58.043 [2024-10-09 03:20:41.255183] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:17:58.043 [2024-10-09 03:20:41.261585] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:58.043 [2024-10-09 03:20:41.261639] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:58.043 [2024-10-09 03:20:41.261858] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.043 "name": "raid_bdev1", 00:17:58.043 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:58.043 "strip_size_kb": 64, 00:17:58.043 "state": "online", 00:17:58.043 "raid_level": "raid5f", 00:17:58.043 "superblock": true, 00:17:58.043 "num_base_bdevs": 4, 00:17:58.043 "num_base_bdevs_discovered": 4, 00:17:58.043 "num_base_bdevs_operational": 4, 00:17:58.043 "base_bdevs_list": [ 00:17:58.043 { 00:17:58.043 "name": "spare", 00:17:58.043 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:58.043 "is_configured": true, 00:17:58.043 "data_offset": 2048, 00:17:58.043 "data_size": 63488 00:17:58.043 }, 00:17:58.043 { 00:17:58.043 "name": "BaseBdev2", 00:17:58.043 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:58.043 "is_configured": true, 00:17:58.043 "data_offset": 2048, 00:17:58.043 "data_size": 63488 00:17:58.043 }, 00:17:58.043 { 00:17:58.043 "name": "BaseBdev3", 00:17:58.043 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:58.043 
"is_configured": true, 00:17:58.043 "data_offset": 2048, 00:17:58.043 "data_size": 63488 00:17:58.043 }, 00:17:58.043 { 00:17:58.043 "name": "BaseBdev4", 00:17:58.043 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:58.043 "is_configured": true, 00:17:58.043 "data_offset": 2048, 00:17:58.043 "data_size": 63488 00:17:58.043 } 00:17:58.043 ] 00:17:58.043 }' 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.043 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:58.613 "name": "raid_bdev1", 00:17:58.613 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:58.613 "strip_size_kb": 64, 00:17:58.613 "state": "online", 00:17:58.613 "raid_level": "raid5f", 
00:17:58.613 "superblock": true, 00:17:58.613 "num_base_bdevs": 4, 00:17:58.613 "num_base_bdevs_discovered": 4, 00:17:58.613 "num_base_bdevs_operational": 4, 00:17:58.613 "base_bdevs_list": [ 00:17:58.613 { 00:17:58.613 "name": "spare", 00:17:58.613 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:17:58.613 "is_configured": true, 00:17:58.613 "data_offset": 2048, 00:17:58.613 "data_size": 63488 00:17:58.613 }, 00:17:58.613 { 00:17:58.613 "name": "BaseBdev2", 00:17:58.613 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:58.613 "is_configured": true, 00:17:58.613 "data_offset": 2048, 00:17:58.613 "data_size": 63488 00:17:58.613 }, 00:17:58.613 { 00:17:58.613 "name": "BaseBdev3", 00:17:58.613 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:58.613 "is_configured": true, 00:17:58.613 "data_offset": 2048, 00:17:58.613 "data_size": 63488 00:17:58.613 }, 00:17:58.613 { 00:17:58.613 "name": "BaseBdev4", 00:17:58.613 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:58.613 "is_configured": true, 00:17:58.613 "data_offset": 2048, 00:17:58.613 "data_size": 63488 00:17:58.613 } 00:17:58.613 ] 00:17:58.613 }' 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.613 [2024-10-09 03:20:41.905700] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.613 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.614 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:58.614 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:58.614 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.614 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.614 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.614 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.873 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.873 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.873 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.873 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.873 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.873 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.873 "name": "raid_bdev1", 00:17:58.873 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:17:58.873 "strip_size_kb": 64, 00:17:58.873 "state": "online", 00:17:58.873 "raid_level": "raid5f", 00:17:58.873 "superblock": true, 00:17:58.873 "num_base_bdevs": 4, 00:17:58.873 "num_base_bdevs_discovered": 3, 00:17:58.873 "num_base_bdevs_operational": 3, 00:17:58.873 "base_bdevs_list": [ 00:17:58.873 { 00:17:58.873 "name": null, 00:17:58.873 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.873 "is_configured": false, 00:17:58.873 "data_offset": 0, 00:17:58.873 "data_size": 63488 00:17:58.873 }, 00:17:58.873 { 00:17:58.873 "name": "BaseBdev2", 00:17:58.873 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:17:58.873 "is_configured": true, 00:17:58.873 "data_offset": 2048, 00:17:58.873 "data_size": 63488 00:17:58.873 }, 00:17:58.873 { 00:17:58.873 "name": "BaseBdev3", 00:17:58.873 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:17:58.873 "is_configured": true, 00:17:58.873 "data_offset": 2048, 00:17:58.873 "data_size": 63488 00:17:58.873 }, 00:17:58.873 { 00:17:58.873 "name": "BaseBdev4", 00:17:58.873 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:17:58.873 "is_configured": true, 00:17:58.873 "data_offset": 2048, 00:17:58.873 "data_size": 63488 00:17:58.873 } 00:17:58.874 ] 00:17:58.874 }' 00:17:58.874 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.874 03:20:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.133 03:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:59.133 03:20:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.133 03:20:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.133 [2024-10-09 03:20:42.293044] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:59.133 [2024-10-09 03:20:42.293199] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:17:59.133 [2024-10-09 03:20:42.293257] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:17:59.133 [2024-10-09 03:20:42.293310] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:59.133 [2024-10-09 03:20:42.306167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:17:59.133 03:20:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.133 03:20:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:17:59.133 [2024-10-09 03:20:42.314556] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:00.074 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.074 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:00.074 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:00.074 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:00.074 03:20:43 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:00.074 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.074 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.074 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.074 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.074 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.074 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:00.074 "name": "raid_bdev1", 00:18:00.074 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:18:00.074 "strip_size_kb": 64, 00:18:00.074 "state": "online", 00:18:00.074 "raid_level": "raid5f", 00:18:00.074 "superblock": true, 00:18:00.074 "num_base_bdevs": 4, 00:18:00.074 "num_base_bdevs_discovered": 4, 00:18:00.074 "num_base_bdevs_operational": 4, 00:18:00.074 "process": { 00:18:00.074 "type": "rebuild", 00:18:00.074 "target": "spare", 00:18:00.074 "progress": { 00:18:00.074 "blocks": 19200, 00:18:00.074 "percent": 10 00:18:00.074 } 00:18:00.074 }, 00:18:00.074 "base_bdevs_list": [ 00:18:00.074 { 00:18:00.074 "name": "spare", 00:18:00.074 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:18:00.074 "is_configured": true, 00:18:00.074 "data_offset": 2048, 00:18:00.074 "data_size": 63488 00:18:00.074 }, 00:18:00.074 { 00:18:00.074 "name": "BaseBdev2", 00:18:00.074 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:18:00.074 "is_configured": true, 00:18:00.074 "data_offset": 2048, 00:18:00.074 "data_size": 63488 00:18:00.074 }, 00:18:00.074 { 00:18:00.074 "name": "BaseBdev3", 00:18:00.074 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:18:00.074 "is_configured": true, 00:18:00.074 "data_offset": 2048, 00:18:00.074 "data_size": 
63488 00:18:00.074 }, 00:18:00.074 { 00:18:00.074 "name": "BaseBdev4", 00:18:00.074 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:18:00.074 "is_configured": true, 00:18:00.074 "data_offset": 2048, 00:18:00.074 "data_size": 63488 00:18:00.074 } 00:18:00.074 ] 00:18:00.074 }' 00:18:00.074 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.334 [2024-10-09 03:20:43.453340] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:00.334 [2024-10-09 03:20:43.521350] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:00.334 [2024-10-09 03:20:43.521458] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:00.334 [2024-10-09 03:20:43.521492] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:00.334 [2024-10-09 03:20:43.521515] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:00.334 "name": "raid_bdev1", 00:18:00.334 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:18:00.334 "strip_size_kb": 64, 00:18:00.334 "state": "online", 00:18:00.334 "raid_level": "raid5f", 00:18:00.334 "superblock": true, 00:18:00.334 "num_base_bdevs": 4, 00:18:00.334 "num_base_bdevs_discovered": 3, 00:18:00.334 "num_base_bdevs_operational": 3, 00:18:00.334 "base_bdevs_list": [ 00:18:00.334 
{ 00:18:00.334 "name": null, 00:18:00.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:00.334 "is_configured": false, 00:18:00.334 "data_offset": 0, 00:18:00.334 "data_size": 63488 00:18:00.334 }, 00:18:00.334 { 00:18:00.334 "name": "BaseBdev2", 00:18:00.334 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:18:00.334 "is_configured": true, 00:18:00.334 "data_offset": 2048, 00:18:00.334 "data_size": 63488 00:18:00.334 }, 00:18:00.334 { 00:18:00.334 "name": "BaseBdev3", 00:18:00.334 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:18:00.334 "is_configured": true, 00:18:00.334 "data_offset": 2048, 00:18:00.334 "data_size": 63488 00:18:00.334 }, 00:18:00.334 { 00:18:00.334 "name": "BaseBdev4", 00:18:00.334 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:18:00.334 "is_configured": true, 00:18:00.334 "data_offset": 2048, 00:18:00.334 "data_size": 63488 00:18:00.334 } 00:18:00.334 ] 00:18:00.334 }' 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:00.334 03:20:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.904 03:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:00.904 03:20:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:00.904 03:20:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.904 [2024-10-09 03:20:44.010939] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:00.904 [2024-10-09 03:20:44.011041] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:00.904 [2024-10-09 03:20:44.011091] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:18:00.904 [2024-10-09 03:20:44.011122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:00.904 [2024-10-09 03:20:44.011601] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:00.904 [2024-10-09 03:20:44.011664] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:00.904 [2024-10-09 03:20:44.011766] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:00.904 [2024-10-09 03:20:44.011807] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:00.904 [2024-10-09 03:20:44.011860] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:00.904 [2024-10-09 03:20:44.011908] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.904 [2024-10-09 03:20:44.024916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:18:00.904 spare 00:18:00.904 03:20:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:00.904 03:20:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:00.904 [2024-10-09 03:20:44.033489] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:01.844 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.844 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.844 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.844 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.844 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.844 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.844 03:20:45 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.844 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.844 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.844 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.844 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.844 "name": "raid_bdev1", 00:18:01.844 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:18:01.844 "strip_size_kb": 64, 00:18:01.844 "state": "online", 00:18:01.844 "raid_level": "raid5f", 00:18:01.844 "superblock": true, 00:18:01.844 "num_base_bdevs": 4, 00:18:01.844 "num_base_bdevs_discovered": 4, 00:18:01.844 "num_base_bdevs_operational": 4, 00:18:01.844 "process": { 00:18:01.844 "type": "rebuild", 00:18:01.844 "target": "spare", 00:18:01.844 "progress": { 00:18:01.844 "blocks": 19200, 00:18:01.844 "percent": 10 00:18:01.844 } 00:18:01.844 }, 00:18:01.844 "base_bdevs_list": [ 00:18:01.844 { 00:18:01.844 "name": "spare", 00:18:01.844 "uuid": "835a10ba-4a95-51f2-8ff1-036c5c87119e", 00:18:01.844 "is_configured": true, 00:18:01.844 "data_offset": 2048, 00:18:01.844 "data_size": 63488 00:18:01.844 }, 00:18:01.844 { 00:18:01.844 "name": "BaseBdev2", 00:18:01.844 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:18:01.844 "is_configured": true, 00:18:01.844 "data_offset": 2048, 00:18:01.844 "data_size": 63488 00:18:01.844 }, 00:18:01.844 { 00:18:01.844 "name": "BaseBdev3", 00:18:01.844 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:18:01.844 "is_configured": true, 00:18:01.844 "data_offset": 2048, 00:18:01.844 "data_size": 63488 00:18:01.844 }, 00:18:01.844 { 00:18:01.844 "name": "BaseBdev4", 00:18:01.844 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:18:01.844 "is_configured": true, 00:18:01.844 "data_offset": 2048, 00:18:01.844 "data_size": 63488 00:18:01.844 } 
00:18:01.844 ] 00:18:01.844 }' 00:18:01.844 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.844 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.844 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.104 [2024-10-09 03:20:45.184388] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.104 [2024-10-09 03:20:45.240190] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:02.104 [2024-10-09 03:20:45.240287] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:02.104 [2024-10-09 03:20:45.240324] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:02.104 [2024-10-09 03:20:45.240344] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:02.104 "name": "raid_bdev1", 00:18:02.104 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:18:02.104 "strip_size_kb": 64, 00:18:02.104 "state": "online", 00:18:02.104 "raid_level": "raid5f", 00:18:02.104 "superblock": true, 00:18:02.104 "num_base_bdevs": 4, 00:18:02.104 "num_base_bdevs_discovered": 3, 00:18:02.104 "num_base_bdevs_operational": 3, 00:18:02.104 "base_bdevs_list": [ 00:18:02.104 { 00:18:02.104 "name": null, 00:18:02.104 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.104 "is_configured": false, 00:18:02.104 "data_offset": 0, 00:18:02.104 "data_size": 63488 00:18:02.104 }, 00:18:02.104 { 00:18:02.104 
"name": "BaseBdev2", 00:18:02.104 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:18:02.104 "is_configured": true, 00:18:02.104 "data_offset": 2048, 00:18:02.104 "data_size": 63488 00:18:02.104 }, 00:18:02.104 { 00:18:02.104 "name": "BaseBdev3", 00:18:02.104 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:18:02.104 "is_configured": true, 00:18:02.104 "data_offset": 2048, 00:18:02.104 "data_size": 63488 00:18:02.104 }, 00:18:02.104 { 00:18:02.104 "name": "BaseBdev4", 00:18:02.104 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:18:02.104 "is_configured": true, 00:18:02.104 "data_offset": 2048, 00:18:02.104 "data_size": 63488 00:18:02.104 } 00:18:02.104 ] 00:18:02.104 }' 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:02.104 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.673 "name": "raid_bdev1", 00:18:02.673 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:18:02.673 "strip_size_kb": 64, 00:18:02.673 "state": "online", 00:18:02.673 "raid_level": "raid5f", 00:18:02.673 "superblock": true, 00:18:02.673 "num_base_bdevs": 4, 00:18:02.673 "num_base_bdevs_discovered": 3, 00:18:02.673 "num_base_bdevs_operational": 3, 00:18:02.673 "base_bdevs_list": [ 00:18:02.673 { 00:18:02.673 "name": null, 00:18:02.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:02.673 "is_configured": false, 00:18:02.673 "data_offset": 0, 00:18:02.673 "data_size": 63488 00:18:02.673 }, 00:18:02.673 { 00:18:02.673 "name": "BaseBdev2", 00:18:02.673 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:18:02.673 "is_configured": true, 00:18:02.673 "data_offset": 2048, 00:18:02.673 "data_size": 63488 00:18:02.673 }, 00:18:02.673 { 00:18:02.673 "name": "BaseBdev3", 00:18:02.673 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:18:02.673 "is_configured": true, 00:18:02.673 "data_offset": 2048, 00:18:02.673 "data_size": 63488 00:18:02.673 }, 00:18:02.673 { 00:18:02.673 "name": "BaseBdev4", 00:18:02.673 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:18:02.673 "is_configured": true, 00:18:02.673 "data_offset": 2048, 00:18:02.673 "data_size": 63488 00:18:02.673 } 00:18:02.673 ] 00:18:02.673 }' 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd 
bdev_passthru_delete BaseBdev1 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.673 [2024-10-09 03:20:45.817685] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:02.673 [2024-10-09 03:20:45.817778] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:02.673 [2024-10-09 03:20:45.817806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:18:02.673 [2024-10-09 03:20:45.817816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:02.673 [2024-10-09 03:20:45.818294] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:02.673 [2024-10-09 03:20:45.818313] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:02.673 [2024-10-09 03:20:45.818385] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:02.673 [2024-10-09 03:20:45.818398] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:02.673 [2024-10-09 03:20:45.818409] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:02.673 [2024-10-09 03:20:45.818419] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to 
examine bdev BaseBdev1: Invalid argument 00:18:02.673 BaseBdev1 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.673 03:20:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.613 03:20:46 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.613 "name": "raid_bdev1", 00:18:03.613 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:18:03.613 "strip_size_kb": 64, 00:18:03.613 "state": "online", 00:18:03.613 "raid_level": "raid5f", 00:18:03.613 "superblock": true, 00:18:03.613 "num_base_bdevs": 4, 00:18:03.613 "num_base_bdevs_discovered": 3, 00:18:03.613 "num_base_bdevs_operational": 3, 00:18:03.613 "base_bdevs_list": [ 00:18:03.613 { 00:18:03.613 "name": null, 00:18:03.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.613 "is_configured": false, 00:18:03.613 "data_offset": 0, 00:18:03.613 "data_size": 63488 00:18:03.613 }, 00:18:03.613 { 00:18:03.613 "name": "BaseBdev2", 00:18:03.613 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:18:03.613 "is_configured": true, 00:18:03.613 "data_offset": 2048, 00:18:03.613 "data_size": 63488 00:18:03.613 }, 00:18:03.613 { 00:18:03.613 "name": "BaseBdev3", 00:18:03.613 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:18:03.613 "is_configured": true, 00:18:03.613 "data_offset": 2048, 00:18:03.613 "data_size": 63488 00:18:03.613 }, 00:18:03.613 { 00:18:03.613 "name": "BaseBdev4", 00:18:03.613 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:18:03.613 "is_configured": true, 00:18:03.613 "data_offset": 2048, 00:18:03.613 "data_size": 63488 00:18:03.613 } 00:18:03.613 ] 00:18:03.613 }' 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.613 03:20:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:04.183 03:20:47 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:04.183 "name": "raid_bdev1", 00:18:04.183 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:18:04.183 "strip_size_kb": 64, 00:18:04.183 "state": "online", 00:18:04.183 "raid_level": "raid5f", 00:18:04.183 "superblock": true, 00:18:04.183 "num_base_bdevs": 4, 00:18:04.183 "num_base_bdevs_discovered": 3, 00:18:04.183 "num_base_bdevs_operational": 3, 00:18:04.183 "base_bdevs_list": [ 00:18:04.183 { 00:18:04.183 "name": null, 00:18:04.183 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.183 "is_configured": false, 00:18:04.183 "data_offset": 0, 00:18:04.183 "data_size": 63488 00:18:04.183 }, 00:18:04.183 { 00:18:04.183 "name": "BaseBdev2", 00:18:04.183 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:18:04.183 "is_configured": true, 00:18:04.183 "data_offset": 2048, 00:18:04.183 "data_size": 63488 00:18:04.183 }, 00:18:04.183 { 00:18:04.183 "name": "BaseBdev3", 00:18:04.183 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:18:04.183 "is_configured": true, 00:18:04.183 "data_offset": 2048, 00:18:04.183 "data_size": 63488 00:18:04.183 }, 00:18:04.183 { 00:18:04.183 "name": "BaseBdev4", 00:18:04.183 "uuid": 
"3bde819e-516e-580e-9852-2980503c2db6", 00:18:04.183 "is_configured": true, 00:18:04.183 "data_offset": 2048, 00:18:04.183 "data_size": 63488 00:18:04.183 } 00:18:04.183 ] 00:18:04.183 }' 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@650 -- # local es=0 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.183 [2024-10-09 03:20:47.427028] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:04.183 
[2024-10-09 03:20:47.427131] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:04.183 [2024-10-09 03:20:47.427147] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:04.183 request: 00:18:04.183 { 00:18:04.183 "base_bdev": "BaseBdev1", 00:18:04.183 "raid_bdev": "raid_bdev1", 00:18:04.183 "method": "bdev_raid_add_base_bdev", 00:18:04.183 "req_id": 1 00:18:04.183 } 00:18:04.183 Got JSON-RPC error response 00:18:04.183 response: 00:18:04.183 { 00:18:04.183 "code": -22, 00:18:04.183 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:04.183 } 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@653 -- # es=1 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:04.183 03:20:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:05.565 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:05.565 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:05.565 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:05.565 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:05.565 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:05.565 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=3 00:18:05.565 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:05.565 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:05.565 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:05.565 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:05.566 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.566 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.566 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.566 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.566 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.566 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:05.566 "name": "raid_bdev1", 00:18:05.566 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:18:05.566 "strip_size_kb": 64, 00:18:05.566 "state": "online", 00:18:05.566 "raid_level": "raid5f", 00:18:05.566 "superblock": true, 00:18:05.566 "num_base_bdevs": 4, 00:18:05.566 "num_base_bdevs_discovered": 3, 00:18:05.566 "num_base_bdevs_operational": 3, 00:18:05.566 "base_bdevs_list": [ 00:18:05.566 { 00:18:05.566 "name": null, 00:18:05.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.566 "is_configured": false, 00:18:05.566 "data_offset": 0, 00:18:05.566 "data_size": 63488 00:18:05.566 }, 00:18:05.566 { 00:18:05.566 "name": "BaseBdev2", 00:18:05.566 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:18:05.566 "is_configured": true, 00:18:05.566 "data_offset": 2048, 00:18:05.566 "data_size": 63488 00:18:05.566 }, 00:18:05.566 { 00:18:05.566 "name": 
"BaseBdev3", 00:18:05.566 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:18:05.566 "is_configured": true, 00:18:05.566 "data_offset": 2048, 00:18:05.566 "data_size": 63488 00:18:05.566 }, 00:18:05.566 { 00:18:05.566 "name": "BaseBdev4", 00:18:05.566 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:18:05.566 "is_configured": true, 00:18:05.566 "data_offset": 2048, 00:18:05.566 "data_size": 63488 00:18:05.566 } 00:18:05.566 ] 00:18:05.566 }' 00:18:05.566 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:05.566 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.826 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:05.826 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.826 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:05.826 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:05.826 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.826 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.826 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.826 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.826 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.826 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.826 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.826 "name": "raid_bdev1", 00:18:05.826 "uuid": "d3f26f9e-acd6-4780-a825-683a8973d0d3", 00:18:05.826 
"strip_size_kb": 64, 00:18:05.826 "state": "online", 00:18:05.826 "raid_level": "raid5f", 00:18:05.826 "superblock": true, 00:18:05.826 "num_base_bdevs": 4, 00:18:05.826 "num_base_bdevs_discovered": 3, 00:18:05.826 "num_base_bdevs_operational": 3, 00:18:05.826 "base_bdevs_list": [ 00:18:05.826 { 00:18:05.826 "name": null, 00:18:05.826 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.826 "is_configured": false, 00:18:05.826 "data_offset": 0, 00:18:05.826 "data_size": 63488 00:18:05.826 }, 00:18:05.826 { 00:18:05.826 "name": "BaseBdev2", 00:18:05.826 "uuid": "f75d8e1e-c0d9-57ef-ab27-07052f99cbb2", 00:18:05.826 "is_configured": true, 00:18:05.826 "data_offset": 2048, 00:18:05.826 "data_size": 63488 00:18:05.826 }, 00:18:05.826 { 00:18:05.826 "name": "BaseBdev3", 00:18:05.826 "uuid": "4a2364de-5868-5151-b5bd-4449120579ef", 00:18:05.826 "is_configured": true, 00:18:05.826 "data_offset": 2048, 00:18:05.826 "data_size": 63488 00:18:05.826 }, 00:18:05.826 { 00:18:05.826 "name": "BaseBdev4", 00:18:05.826 "uuid": "3bde819e-516e-580e-9852-2980503c2db6", 00:18:05.826 "is_configured": true, 00:18:05.826 "data_offset": 2048, 00:18:05.826 "data_size": 63488 00:18:05.826 } 00:18:05.826 ] 00:18:05.826 }' 00:18:05.826 03:20:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.826 03:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:05.826 03:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.826 03:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:05.826 03:20:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85394 00:18:05.826 03:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@950 -- # '[' -z 85394 ']' 00:18:05.826 03:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # kill -0 85394 00:18:05.826 
03:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # uname 00:18:05.826 03:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:05.826 03:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 85394 00:18:05.826 03:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:05.826 killing process with pid 85394 00:18:05.826 Received shutdown signal, test time was about 60.000000 seconds 00:18:05.826 00:18:05.826 Latency(us) 00:18:05.826 [2024-10-09T03:20:49.129Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:05.826 [2024-10-09T03:20:49.129Z] =================================================================================================================== 00:18:05.826 [2024-10-09T03:20:49.129Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:05.826 03:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:05.826 03:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@968 -- # echo 'killing process with pid 85394' 00:18:05.826 03:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@969 -- # kill 85394 00:18:05.826 [2024-10-09 03:20:49.094640] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:05.826 [2024-10-09 03:20:49.094733] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:05.826 [2024-10-09 03:20:49.094795] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:05.826 03:20:49 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@974 -- # wait 85394 00:18:05.826 [2024-10-09 03:20:49.094808] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:06.397 [2024-10-09 03:20:49.595577] 
bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:07.779 03:20:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:07.779 00:18:07.779 real 0m27.139s 00:18:07.779 user 0m33.669s 00:18:07.779 sys 0m3.336s 00:18:07.779 ************************************ 00:18:07.779 END TEST raid5f_rebuild_test_sb 00:18:07.779 ************************************ 00:18:07.779 03:20:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:07.779 03:20:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.779 03:20:50 bdev_raid -- bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:18:07.779 03:20:50 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:18:07.779 03:20:50 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:07.779 03:20:50 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:07.779 03:20:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:07.779 ************************************ 00:18:07.779 START TEST raid_state_function_test_sb_4k 00:18:07.779 ************************************ 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:07.779 Process raid pid: 86210 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@229 -- # raid_pid=86210 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86210' 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86210 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86210 ']' 00:18:07.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:07.779 03:20:50 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:07.779 [2024-10-09 03:20:51.071983] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:18:07.779 [2024-10-09 03:20:51.072206] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:08.039 [2024-10-09 03:20:51.239222] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:08.299 [2024-10-09 03:20:51.483001] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:08.558 [2024-10-09 03:20:51.717132] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.558 [2024-10-09 03:20:51.717170] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.818 [2024-10-09 03:20:51.909421] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:08.818 [2024-10-09 03:20:51.909475] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:08.818 [2024-10-09 03:20:51.909491] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:08.818 [2024-10-09 03:20:51.909500] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
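The `verify_raid_bdev_state` and `verify_raid_bdev_process` checks running throughout this transcript lean on two jq idioms: `select()` to isolate one array's entry from the `bdev_raid_get_bdevs all` output, and the `// "none"` alternative operator to substitute a default when no rebuild process is present. A hedged sketch of both, run against hand-written sample JSON rather than live RPC output:

```shell
#!/usr/bin/env bash
# jq patterns used by the state checks above, against illustrative samples:
#   select()    -- pick one bdev's object out of the get_bdevs array by name
#   // "none"   -- default to the string "none" when .process.type is absent
bdevs='[{"name":"Existed_Raid","state":"configuring"},
        {"name":"raid0","state":"online"}]'

state=$(jq -r '.[] | select(.name == "Existed_Raid") | .state' <<< "$bdevs")
proc=$(jq -r '.process.type // "none"' <<< '{"name":"Existed_Raid"}')

echo "$state $proc"
```

The `[[ none == \n\o\n\e ]]` comparisons in the log are the xtrace rendering of the resulting `[[ $tmp == "none" ]]` test, with the right-hand side escaped character by character by `set -x`.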
00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:08.818 "name": "Existed_Raid", 00:18:08.818 "uuid": 
"9628ac2c-17a5-49e6-8ce4-4c1fd630176b", 00:18:08.818 "strip_size_kb": 0, 00:18:08.818 "state": "configuring", 00:18:08.818 "raid_level": "raid1", 00:18:08.818 "superblock": true, 00:18:08.818 "num_base_bdevs": 2, 00:18:08.818 "num_base_bdevs_discovered": 0, 00:18:08.818 "num_base_bdevs_operational": 2, 00:18:08.818 "base_bdevs_list": [ 00:18:08.818 { 00:18:08.818 "name": "BaseBdev1", 00:18:08.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.818 "is_configured": false, 00:18:08.818 "data_offset": 0, 00:18:08.818 "data_size": 0 00:18:08.818 }, 00:18:08.818 { 00:18:08.818 "name": "BaseBdev2", 00:18:08.818 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:08.818 "is_configured": false, 00:18:08.818 "data_offset": 0, 00:18:08.818 "data_size": 0 00:18:08.818 } 00:18:08.818 ] 00:18:08.818 }' 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:08.818 03:20:51 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.078 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:09.078 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.078 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.078 [2024-10-09 03:20:52.312720] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:09.078 [2024-10-09 03:20:52.312822] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:09.078 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.078 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:09.078 03:20:52 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.078 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.078 [2024-10-09 03:20:52.324734] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:09.078 [2024-10-09 03:20:52.324816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:09.078 [2024-10-09 03:20:52.324849] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.078 [2024-10-09 03:20:52.324874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.078 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.078 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:18:09.078 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.078 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.337 [2024-10-09 03:20:52.413300] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.337 BaseBdev1 00:18:09.337 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.337 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:09.337 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:09.337 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:09.337 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:18:09.337 03:20:52 bdev_raid.raid_state_function_test_sb_4k 
-- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:09.337 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:09.337 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:09.337 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.337 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.337 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.337 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:09.337 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.337 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.337 [ 00:18:09.337 { 00:18:09.337 "name": "BaseBdev1", 00:18:09.337 "aliases": [ 00:18:09.337 "b412bf72-cd00-4052-ba2d-fcbcbd3f712d" 00:18:09.337 ], 00:18:09.337 "product_name": "Malloc disk", 00:18:09.337 "block_size": 4096, 00:18:09.337 "num_blocks": 8192, 00:18:09.337 "uuid": "b412bf72-cd00-4052-ba2d-fcbcbd3f712d", 00:18:09.337 "assigned_rate_limits": { 00:18:09.337 "rw_ios_per_sec": 0, 00:18:09.337 "rw_mbytes_per_sec": 0, 00:18:09.337 "r_mbytes_per_sec": 0, 00:18:09.337 "w_mbytes_per_sec": 0 00:18:09.337 }, 00:18:09.337 "claimed": true, 00:18:09.337 "claim_type": "exclusive_write", 00:18:09.338 "zoned": false, 00:18:09.338 "supported_io_types": { 00:18:09.338 "read": true, 00:18:09.338 "write": true, 00:18:09.338 "unmap": true, 00:18:09.338 "flush": true, 00:18:09.338 "reset": true, 00:18:09.338 "nvme_admin": false, 00:18:09.338 "nvme_io": false, 00:18:09.338 "nvme_io_md": false, 00:18:09.338 "write_zeroes": true, 00:18:09.338 "zcopy": true, 00:18:09.338 
"get_zone_info": false, 00:18:09.338 "zone_management": false, 00:18:09.338 "zone_append": false, 00:18:09.338 "compare": false, 00:18:09.338 "compare_and_write": false, 00:18:09.338 "abort": true, 00:18:09.338 "seek_hole": false, 00:18:09.338 "seek_data": false, 00:18:09.338 "copy": true, 00:18:09.338 "nvme_iov_md": false 00:18:09.338 }, 00:18:09.338 "memory_domains": [ 00:18:09.338 { 00:18:09.338 "dma_device_id": "system", 00:18:09.338 "dma_device_type": 1 00:18:09.338 }, 00:18:09.338 { 00:18:09.338 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:09.338 "dma_device_type": 2 00:18:09.338 } 00:18:09.338 ], 00:18:09.338 "driver_specific": {} 00:18:09.338 } 00:18:09.338 ] 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.338 "name": "Existed_Raid", 00:18:09.338 "uuid": "c550b5da-c67e-4073-878d-d32abe94e2b3", 00:18:09.338 "strip_size_kb": 0, 00:18:09.338 "state": "configuring", 00:18:09.338 "raid_level": "raid1", 00:18:09.338 "superblock": true, 00:18:09.338 "num_base_bdevs": 2, 00:18:09.338 "num_base_bdevs_discovered": 1, 00:18:09.338 "num_base_bdevs_operational": 2, 00:18:09.338 "base_bdevs_list": [ 00:18:09.338 { 00:18:09.338 "name": "BaseBdev1", 00:18:09.338 "uuid": "b412bf72-cd00-4052-ba2d-fcbcbd3f712d", 00:18:09.338 "is_configured": true, 00:18:09.338 "data_offset": 256, 00:18:09.338 "data_size": 7936 00:18:09.338 }, 00:18:09.338 { 00:18:09.338 "name": "BaseBdev2", 00:18:09.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.338 "is_configured": false, 00:18:09.338 "data_offset": 0, 00:18:09.338 "data_size": 0 00:18:09.338 } 00:18:09.338 ] 00:18:09.338 }' 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.338 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.599 [2024-10-09 03:20:52.868865] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:09.599 [2024-10-09 03:20:52.868954] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.599 [2024-10-09 03:20:52.880911] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:09.599 [2024-10-09 03:20:52.882910] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:09.599 [2024-10-09 03:20:52.882982] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:09.599 03:20:52 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.599 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:09.868 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.868 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:09.868 "name": "Existed_Raid", 00:18:09.868 "uuid": "26962686-5fa1-4fbd-b54c-9beabf7663df", 00:18:09.868 "strip_size_kb": 0, 00:18:09.868 "state": "configuring", 00:18:09.868 "raid_level": "raid1", 00:18:09.868 "superblock": true, 
00:18:09.868 "num_base_bdevs": 2, 00:18:09.868 "num_base_bdevs_discovered": 1, 00:18:09.868 "num_base_bdevs_operational": 2, 00:18:09.868 "base_bdevs_list": [ 00:18:09.868 { 00:18:09.868 "name": "BaseBdev1", 00:18:09.868 "uuid": "b412bf72-cd00-4052-ba2d-fcbcbd3f712d", 00:18:09.868 "is_configured": true, 00:18:09.868 "data_offset": 256, 00:18:09.868 "data_size": 7936 00:18:09.868 }, 00:18:09.868 { 00:18:09.868 "name": "BaseBdev2", 00:18:09.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.868 "is_configured": false, 00:18:09.868 "data_offset": 0, 00:18:09.868 "data_size": 0 00:18:09.868 } 00:18:09.868 ] 00:18:09.868 }' 00:18:09.868 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:09.868 03:20:52 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.144 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:18:10.144 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.144 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.144 [2024-10-09 03:20:53.351250] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:10.144 [2024-10-09 03:20:53.351566] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:10.144 [2024-10-09 03:20:53.351624] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:10.144 [2024-10-09 03:20:53.351937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:10.144 [2024-10-09 03:20:53.352151] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:10.144 [2024-10-09 03:20:53.352197] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:18:10.144 BaseBdev2 [2024-10-09 03:20:53.352379] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:10.144 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.144 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:10.144 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:10.144 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:10.144 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@901 -- # local i 00:18:10.144 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:10.144 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:10.144 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:10.144 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.144 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.144 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.145 [ 00:18:10.145 { 00:18:10.145 "name": "BaseBdev2", 00:18:10.145 "aliases": [ 00:18:10.145 "5b50cd11-0bb8-46e6-83b0-083b4929091e" 00:18:10.145 ], 00:18:10.145 "product_name": "Malloc 
disk", 00:18:10.145 "block_size": 4096, 00:18:10.145 "num_blocks": 8192, 00:18:10.145 "uuid": "5b50cd11-0bb8-46e6-83b0-083b4929091e", 00:18:10.145 "assigned_rate_limits": { 00:18:10.145 "rw_ios_per_sec": 0, 00:18:10.145 "rw_mbytes_per_sec": 0, 00:18:10.145 "r_mbytes_per_sec": 0, 00:18:10.145 "w_mbytes_per_sec": 0 00:18:10.145 }, 00:18:10.145 "claimed": true, 00:18:10.145 "claim_type": "exclusive_write", 00:18:10.145 "zoned": false, 00:18:10.145 "supported_io_types": { 00:18:10.145 "read": true, 00:18:10.145 "write": true, 00:18:10.145 "unmap": true, 00:18:10.145 "flush": true, 00:18:10.145 "reset": true, 00:18:10.145 "nvme_admin": false, 00:18:10.145 "nvme_io": false, 00:18:10.145 "nvme_io_md": false, 00:18:10.145 "write_zeroes": true, 00:18:10.145 "zcopy": true, 00:18:10.145 "get_zone_info": false, 00:18:10.145 "zone_management": false, 00:18:10.145 "zone_append": false, 00:18:10.145 "compare": false, 00:18:10.145 "compare_and_write": false, 00:18:10.145 "abort": true, 00:18:10.145 "seek_hole": false, 00:18:10.145 "seek_data": false, 00:18:10.145 "copy": true, 00:18:10.145 "nvme_iov_md": false 00:18:10.145 }, 00:18:10.145 "memory_domains": [ 00:18:10.145 { 00:18:10.145 "dma_device_id": "system", 00:18:10.145 "dma_device_type": 1 00:18:10.145 }, 00:18:10.145 { 00:18:10.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.145 "dma_device_type": 2 00:18:10.145 } 00:18:10.145 ], 00:18:10.145 "driver_specific": {} 00:18:10.145 } 00:18:10.145 ] 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@907 -- # return 0 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 
-- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.145 "name": "Existed_Raid", 00:18:10.145 "uuid": "26962686-5fa1-4fbd-b54c-9beabf7663df", 00:18:10.145 "strip_size_kb": 0, 00:18:10.145 "state": "online", 
00:18:10.145 "raid_level": "raid1", 00:18:10.145 "superblock": true, 00:18:10.145 "num_base_bdevs": 2, 00:18:10.145 "num_base_bdevs_discovered": 2, 00:18:10.145 "num_base_bdevs_operational": 2, 00:18:10.145 "base_bdevs_list": [ 00:18:10.145 { 00:18:10.145 "name": "BaseBdev1", 00:18:10.145 "uuid": "b412bf72-cd00-4052-ba2d-fcbcbd3f712d", 00:18:10.145 "is_configured": true, 00:18:10.145 "data_offset": 256, 00:18:10.145 "data_size": 7936 00:18:10.145 }, 00:18:10.145 { 00:18:10.145 "name": "BaseBdev2", 00:18:10.145 "uuid": "5b50cd11-0bb8-46e6-83b0-083b4929091e", 00:18:10.145 "is_configured": true, 00:18:10.145 "data_offset": 256, 00:18:10.145 "data_size": 7936 00:18:10.145 } 00:18:10.145 ] 00:18:10.145 }' 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.145 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.715 [2024-10-09 03:20:53.806917] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:10.715 "name": "Existed_Raid", 00:18:10.715 "aliases": [ 00:18:10.715 "26962686-5fa1-4fbd-b54c-9beabf7663df" 00:18:10.715 ], 00:18:10.715 "product_name": "Raid Volume", 00:18:10.715 "block_size": 4096, 00:18:10.715 "num_blocks": 7936, 00:18:10.715 "uuid": "26962686-5fa1-4fbd-b54c-9beabf7663df", 00:18:10.715 "assigned_rate_limits": { 00:18:10.715 "rw_ios_per_sec": 0, 00:18:10.715 "rw_mbytes_per_sec": 0, 00:18:10.715 "r_mbytes_per_sec": 0, 00:18:10.715 "w_mbytes_per_sec": 0 00:18:10.715 }, 00:18:10.715 "claimed": false, 00:18:10.715 "zoned": false, 00:18:10.715 "supported_io_types": { 00:18:10.715 "read": true, 00:18:10.715 "write": true, 00:18:10.715 "unmap": false, 00:18:10.715 "flush": false, 00:18:10.715 "reset": true, 00:18:10.715 "nvme_admin": false, 00:18:10.715 "nvme_io": false, 00:18:10.715 "nvme_io_md": false, 00:18:10.715 "write_zeroes": true, 00:18:10.715 "zcopy": false, 00:18:10.715 "get_zone_info": false, 00:18:10.715 "zone_management": false, 00:18:10.715 "zone_append": false, 00:18:10.715 "compare": false, 00:18:10.715 "compare_and_write": false, 00:18:10.715 "abort": false, 00:18:10.715 "seek_hole": false, 00:18:10.715 "seek_data": false, 00:18:10.715 "copy": false, 00:18:10.715 "nvme_iov_md": false 00:18:10.715 }, 00:18:10.715 "memory_domains": [ 00:18:10.715 { 00:18:10.715 "dma_device_id": "system", 00:18:10.715 "dma_device_type": 1 00:18:10.715 }, 00:18:10.715 { 00:18:10.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.715 "dma_device_type": 2 00:18:10.715 }, 00:18:10.715 { 00:18:10.715 
"dma_device_id": "system", 00:18:10.715 "dma_device_type": 1 00:18:10.715 }, 00:18:10.715 { 00:18:10.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:10.715 "dma_device_type": 2 00:18:10.715 } 00:18:10.715 ], 00:18:10.715 "driver_specific": { 00:18:10.715 "raid": { 00:18:10.715 "uuid": "26962686-5fa1-4fbd-b54c-9beabf7663df", 00:18:10.715 "strip_size_kb": 0, 00:18:10.715 "state": "online", 00:18:10.715 "raid_level": "raid1", 00:18:10.715 "superblock": true, 00:18:10.715 "num_base_bdevs": 2, 00:18:10.715 "num_base_bdevs_discovered": 2, 00:18:10.715 "num_base_bdevs_operational": 2, 00:18:10.715 "base_bdevs_list": [ 00:18:10.715 { 00:18:10.715 "name": "BaseBdev1", 00:18:10.715 "uuid": "b412bf72-cd00-4052-ba2d-fcbcbd3f712d", 00:18:10.715 "is_configured": true, 00:18:10.715 "data_offset": 256, 00:18:10.715 "data_size": 7936 00:18:10.715 }, 00:18:10.715 { 00:18:10.715 "name": "BaseBdev2", 00:18:10.715 "uuid": "5b50cd11-0bb8-46e6-83b0-083b4929091e", 00:18:10.715 "is_configured": true, 00:18:10.715 "data_offset": 256, 00:18:10.715 "data_size": 7936 00:18:10.715 } 00:18:10.715 ] 00:18:10.715 } 00:18:10.715 } 00:18:10.715 }' 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:10.715 BaseBdev2' 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.715 03:20:53 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.715 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:10.715 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:10.715 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:10.715 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.715 
03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.715 [2024-10-09 03:20:54.014210] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:10.975 03:20:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:10.975 "name": "Existed_Raid", 00:18:10.975 "uuid": "26962686-5fa1-4fbd-b54c-9beabf7663df", 00:18:10.975 "strip_size_kb": 0, 00:18:10.975 "state": "online", 00:18:10.975 "raid_level": "raid1", 00:18:10.975 "superblock": true, 00:18:10.975 "num_base_bdevs": 2, 00:18:10.975 "num_base_bdevs_discovered": 1, 00:18:10.975 "num_base_bdevs_operational": 1, 00:18:10.975 "base_bdevs_list": [ 00:18:10.975 { 00:18:10.975 "name": null, 00:18:10.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:10.975 "is_configured": false, 00:18:10.975 "data_offset": 0, 00:18:10.975 "data_size": 7936 00:18:10.975 }, 00:18:10.975 { 00:18:10.975 "name": "BaseBdev2", 00:18:10.975 "uuid": "5b50cd11-0bb8-46e6-83b0-083b4929091e", 00:18:10.975 "is_configured": true, 00:18:10.975 "data_offset": 256, 00:18:10.975 "data_size": 7936 00:18:10.975 } 00:18:10.975 ] 00:18:10.975 }' 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:10.975 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:11.545 03:20:54 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.545 [2024-10-09 03:20:54.600933] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:11.545 [2024-10-09 03:20:54.601128] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:11.545 [2024-10-09 03:20:54.701853] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:11.545 [2024-10-09 03:20:54.701911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:11.545 [2024-10-09 03:20:54.701925] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:11.545 03:20:54 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86210 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86210 ']' 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86210 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86210 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:11.545 killing process with pid 86210 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86210' 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86210 00:18:11.545 [2024-10-09 03:20:54.796271] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:11.545 03:20:54 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86210 00:18:11.545 [2024-10-09 03:20:54.811984] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:12.928 03:20:56 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:18:12.928 00:18:12.928 real 0m5.149s 00:18:12.928 user 0m7.138s 00:18:12.928 sys 0m0.943s 00:18:12.928 03:20:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:12.928 ************************************ 00:18:12.928 END TEST raid_state_function_test_sb_4k 00:18:12.928 ************************************ 00:18:12.928 03:20:56 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:12.928 03:20:56 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:18:12.928 03:20:56 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:12.928 03:20:56 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:12.928 03:20:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:12.928 ************************************ 00:18:12.928 START TEST raid_superblock_test_4k 00:18:12.928 ************************************ 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1125 -- # 
raid_superblock_test raid1 2 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86462 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # 
waitforlisten 86462 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@831 -- # '[' -z 86462 ']' 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:12.928 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:12.928 03:20:56 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.188 [2024-10-09 03:20:56.294828] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:18:13.188 [2024-10-09 03:20:56.294944] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86462 ] 00:18:13.188 [2024-10-09 03:20:56.456508] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.447 [2024-10-09 03:20:56.690769] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.707 [2024-10-09 03:20:56.889012] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.707 [2024-10-09 03:20:56.889128] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # return 0 00:18:13.967 03:20:57 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc1 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.967 malloc1 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.967 [2024-10-09 03:20:57.179335] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:13.967 [2024-10-09 03:20:57.179480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.967 
[2024-10-09 03:20:57.179523] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:13.967 [2024-10-09 03:20:57.179554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.967 [2024-10-09 03:20:57.181876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.967 [2024-10-09 03:20:57.181945] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:13.967 pt1 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:13.967 malloc2 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.967 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.228 [2024-10-09 03:20:57.273376] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:14.228 [2024-10-09 03:20:57.273432] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:14.228 [2024-10-09 03:20:57.273458] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:14.228 [2024-10-09 03:20:57.273467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:14.228 [2024-10-09 03:20:57.275761] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:14.228 [2024-10-09 03:20:57.275799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:14.228 pt2 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.228 [2024-10-09 03:20:57.285418] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:14.228 [2024-10-09 03:20:57.287438] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:14.228 [2024-10-09 03:20:57.287608] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:14.228 [2024-10-09 03:20:57.287621] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:14.228 [2024-10-09 03:20:57.287854] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:14.228 [2024-10-09 03:20:57.288021] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:14.228 [2024-10-09 03:20:57.288035] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:14.228 [2024-10-09 03:20:57.288173] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.228 "name": "raid_bdev1", 00:18:14.228 "uuid": "5e4af749-073f-465e-95a9-ee36b6caa92d", 00:18:14.228 "strip_size_kb": 0, 00:18:14.228 "state": "online", 00:18:14.228 "raid_level": "raid1", 00:18:14.228 "superblock": true, 00:18:14.228 "num_base_bdevs": 2, 00:18:14.228 "num_base_bdevs_discovered": 2, 00:18:14.228 "num_base_bdevs_operational": 2, 00:18:14.228 "base_bdevs_list": [ 00:18:14.228 { 00:18:14.228 "name": "pt1", 00:18:14.228 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.228 "is_configured": true, 00:18:14.228 "data_offset": 256, 00:18:14.228 "data_size": 7936 00:18:14.228 }, 00:18:14.228 { 00:18:14.228 "name": "pt2", 00:18:14.228 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.228 "is_configured": true, 00:18:14.228 "data_offset": 256, 00:18:14.228 "data_size": 7936 00:18:14.228 } 00:18:14.228 ] 00:18:14.228 }' 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.228 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.488 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:14.488 03:20:57 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:14.488 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:14.488 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:14.488 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:14.488 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:14.488 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:14.488 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.488 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:14.488 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.488 [2024-10-09 03:20:57.736935] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.488 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.488 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:14.488 "name": "raid_bdev1", 00:18:14.488 "aliases": [ 00:18:14.488 "5e4af749-073f-465e-95a9-ee36b6caa92d" 00:18:14.488 ], 00:18:14.488 "product_name": "Raid Volume", 00:18:14.488 "block_size": 4096, 00:18:14.488 "num_blocks": 7936, 00:18:14.488 "uuid": "5e4af749-073f-465e-95a9-ee36b6caa92d", 00:18:14.488 "assigned_rate_limits": { 00:18:14.488 "rw_ios_per_sec": 0, 00:18:14.488 "rw_mbytes_per_sec": 0, 00:18:14.488 "r_mbytes_per_sec": 0, 00:18:14.488 "w_mbytes_per_sec": 0 00:18:14.488 }, 00:18:14.488 "claimed": false, 00:18:14.488 "zoned": false, 00:18:14.488 "supported_io_types": { 00:18:14.488 "read": true, 00:18:14.488 "write": true, 00:18:14.488 "unmap": false, 00:18:14.488 "flush": false, 
00:18:14.488 "reset": true, 00:18:14.488 "nvme_admin": false, 00:18:14.488 "nvme_io": false, 00:18:14.488 "nvme_io_md": false, 00:18:14.488 "write_zeroes": true, 00:18:14.488 "zcopy": false, 00:18:14.488 "get_zone_info": false, 00:18:14.488 "zone_management": false, 00:18:14.488 "zone_append": false, 00:18:14.488 "compare": false, 00:18:14.488 "compare_and_write": false, 00:18:14.488 "abort": false, 00:18:14.488 "seek_hole": false, 00:18:14.488 "seek_data": false, 00:18:14.488 "copy": false, 00:18:14.488 "nvme_iov_md": false 00:18:14.488 }, 00:18:14.488 "memory_domains": [ 00:18:14.488 { 00:18:14.488 "dma_device_id": "system", 00:18:14.489 "dma_device_type": 1 00:18:14.489 }, 00:18:14.489 { 00:18:14.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.489 "dma_device_type": 2 00:18:14.489 }, 00:18:14.489 { 00:18:14.489 "dma_device_id": "system", 00:18:14.489 "dma_device_type": 1 00:18:14.489 }, 00:18:14.489 { 00:18:14.489 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.489 "dma_device_type": 2 00:18:14.489 } 00:18:14.489 ], 00:18:14.489 "driver_specific": { 00:18:14.489 "raid": { 00:18:14.489 "uuid": "5e4af749-073f-465e-95a9-ee36b6caa92d", 00:18:14.489 "strip_size_kb": 0, 00:18:14.489 "state": "online", 00:18:14.489 "raid_level": "raid1", 00:18:14.489 "superblock": true, 00:18:14.489 "num_base_bdevs": 2, 00:18:14.489 "num_base_bdevs_discovered": 2, 00:18:14.489 "num_base_bdevs_operational": 2, 00:18:14.489 "base_bdevs_list": [ 00:18:14.489 { 00:18:14.489 "name": "pt1", 00:18:14.489 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:14.489 "is_configured": true, 00:18:14.489 "data_offset": 256, 00:18:14.489 "data_size": 7936 00:18:14.489 }, 00:18:14.489 { 00:18:14.489 "name": "pt2", 00:18:14.489 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:14.489 "is_configured": true, 00:18:14.489 "data_offset": 256, 00:18:14.489 "data_size": 7936 00:18:14.489 } 00:18:14.489 ] 00:18:14.489 } 00:18:14.489 } 00:18:14.489 }' 00:18:14.489 03:20:57 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:14.749 pt2' 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # 
jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.749 [2024-10-09 03:20:57.956486] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=5e4af749-073f-465e-95a9-ee36b6caa92d 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z 5e4af749-073f-465e-95a9-ee36b6caa92d ']' 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.749 03:20:57 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.749 [2024-10-09 03:20:58.000219] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:14.749 [2024-10-09 03:20:58.000242] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.749 [2024-10-09 03:20:58.000306] bdev_raid.c: 
492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.749 [2024-10-09 03:20:58.000352] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:14.749 [2024-10-09 03:20:58.000364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:14.749 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.749 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:14.749 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.749 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.749 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:14.749 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 
00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # local es=0 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.009 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.009 [2024-10-09 03:20:58.139969] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:15.009 [2024-10-09 03:20:58.141980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:15.009 [2024-10-09 03:20:58.142038] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:15.009 [2024-10-09 03:20:58.142079] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:15.009 [2024-10-09 03:20:58.142092] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:15.009 [2024-10-09 03:20:58.142101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:15.009 request: 00:18:15.009 { 00:18:15.009 "name": "raid_bdev1", 00:18:15.009 "raid_level": "raid1", 00:18:15.009 "base_bdevs": [ 00:18:15.009 "malloc1", 00:18:15.009 "malloc2" 00:18:15.009 ], 00:18:15.009 "superblock": false, 00:18:15.009 "method": "bdev_raid_create", 00:18:15.009 "req_id": 1 00:18:15.009 } 00:18:15.009 Got JSON-RPC error response 00:18:15.009 response: 00:18:15.010 { 00:18:15.010 "code": -17, 00:18:15.010 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:15.010 } 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@653 -- # es=1 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@661 -- # (( es > 
128 )) 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.010 [2024-10-09 03:20:58.203940] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:15.010 [2024-10-09 03:20:58.204028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.010 [2024-10-09 03:20:58.204057] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:15.010 [2024-10-09 03:20:58.204082] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.010 [2024-10-09 03:20:58.206346] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.010 [2024-10-09 03:20:58.206421] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:15.010 [2024-10-09 03:20:58.206499] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:15.010 [2024-10-09 03:20:58.206583] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:15.010 pt1 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] 
| select(.name == "raid_bdev1")' 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.010 "name": "raid_bdev1", 00:18:15.010 "uuid": "5e4af749-073f-465e-95a9-ee36b6caa92d", 00:18:15.010 "strip_size_kb": 0, 00:18:15.010 "state": "configuring", 00:18:15.010 "raid_level": "raid1", 00:18:15.010 "superblock": true, 00:18:15.010 "num_base_bdevs": 2, 00:18:15.010 "num_base_bdevs_discovered": 1, 00:18:15.010 "num_base_bdevs_operational": 2, 00:18:15.010 "base_bdevs_list": [ 00:18:15.010 { 00:18:15.010 "name": "pt1", 00:18:15.010 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:15.010 "is_configured": true, 00:18:15.010 "data_offset": 256, 00:18:15.010 "data_size": 7936 00:18:15.010 }, 00:18:15.010 { 00:18:15.010 "name": null, 00:18:15.010 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.010 "is_configured": false, 00:18:15.010 "data_offset": 256, 00:18:15.010 "data_size": 7936 00:18:15.010 } 00:18:15.010 ] 00:18:15.010 }' 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.010 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set 
+x 00:18:15.580 [2024-10-09 03:20:58.627267] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:15.580 [2024-10-09 03:20:58.627358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:15.580 [2024-10-09 03:20:58.627390] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:15.580 [2024-10-09 03:20:58.627418] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:15.580 [2024-10-09 03:20:58.627778] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:15.580 [2024-10-09 03:20:58.627799] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:15.580 [2024-10-09 03:20:58.627867] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:15.580 [2024-10-09 03:20:58.627889] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:15.580 [2024-10-09 03:20:58.627992] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:15.580 [2024-10-09 03:20:58.628003] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:15.580 [2024-10-09 03:20:58.628226] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:15.580 [2024-10-09 03:20:58.628375] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:15.580 [2024-10-09 03:20:58.628391] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:15.580 [2024-10-09 03:20:58.628510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:15.580 pt2 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:15.580 03:20:58 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.580 "name": "raid_bdev1", 00:18:15.580 "uuid": "5e4af749-073f-465e-95a9-ee36b6caa92d", 00:18:15.580 
"strip_size_kb": 0, 00:18:15.580 "state": "online", 00:18:15.580 "raid_level": "raid1", 00:18:15.580 "superblock": true, 00:18:15.580 "num_base_bdevs": 2, 00:18:15.580 "num_base_bdevs_discovered": 2, 00:18:15.580 "num_base_bdevs_operational": 2, 00:18:15.580 "base_bdevs_list": [ 00:18:15.580 { 00:18:15.580 "name": "pt1", 00:18:15.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:15.580 "is_configured": true, 00:18:15.580 "data_offset": 256, 00:18:15.580 "data_size": 7936 00:18:15.580 }, 00:18:15.580 { 00:18:15.580 "name": "pt2", 00:18:15.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.580 "is_configured": true, 00:18:15.580 "data_offset": 256, 00:18:15.580 "data_size": 7936 00:18:15.580 } 00:18:15.580 ] 00:18:15.580 }' 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.580 03:20:58 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.840 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:15.840 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:15.840 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:15.840 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:15.840 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:18:15.840 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:15.840 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:15.840 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.840 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:15.840 [2024-10-09 03:20:59.022793] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:15.840 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:15.840 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.840 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:15.840 "name": "raid_bdev1", 00:18:15.840 "aliases": [ 00:18:15.840 "5e4af749-073f-465e-95a9-ee36b6caa92d" 00:18:15.840 ], 00:18:15.840 "product_name": "Raid Volume", 00:18:15.840 "block_size": 4096, 00:18:15.840 "num_blocks": 7936, 00:18:15.840 "uuid": "5e4af749-073f-465e-95a9-ee36b6caa92d", 00:18:15.840 "assigned_rate_limits": { 00:18:15.840 "rw_ios_per_sec": 0, 00:18:15.840 "rw_mbytes_per_sec": 0, 00:18:15.840 "r_mbytes_per_sec": 0, 00:18:15.840 "w_mbytes_per_sec": 0 00:18:15.840 }, 00:18:15.840 "claimed": false, 00:18:15.840 "zoned": false, 00:18:15.840 "supported_io_types": { 00:18:15.840 "read": true, 00:18:15.840 "write": true, 00:18:15.840 "unmap": false, 00:18:15.840 "flush": false, 00:18:15.840 "reset": true, 00:18:15.840 "nvme_admin": false, 00:18:15.840 "nvme_io": false, 00:18:15.840 "nvme_io_md": false, 00:18:15.840 "write_zeroes": true, 00:18:15.840 "zcopy": false, 00:18:15.840 "get_zone_info": false, 00:18:15.840 "zone_management": false, 00:18:15.840 "zone_append": false, 00:18:15.840 "compare": false, 00:18:15.840 "compare_and_write": false, 00:18:15.840 "abort": false, 00:18:15.840 "seek_hole": false, 00:18:15.840 "seek_data": false, 00:18:15.840 "copy": false, 00:18:15.840 "nvme_iov_md": false 00:18:15.840 }, 00:18:15.840 "memory_domains": [ 00:18:15.840 { 00:18:15.840 "dma_device_id": "system", 00:18:15.840 "dma_device_type": 1 00:18:15.840 }, 00:18:15.840 { 00:18:15.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.840 "dma_device_type": 2 00:18:15.840 }, 00:18:15.841 { 00:18:15.841 "dma_device_id": "system", 00:18:15.841 "dma_device_type": 1 00:18:15.841 
}, 00:18:15.841 { 00:18:15.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.841 "dma_device_type": 2 00:18:15.841 } 00:18:15.841 ], 00:18:15.841 "driver_specific": { 00:18:15.841 "raid": { 00:18:15.841 "uuid": "5e4af749-073f-465e-95a9-ee36b6caa92d", 00:18:15.841 "strip_size_kb": 0, 00:18:15.841 "state": "online", 00:18:15.841 "raid_level": "raid1", 00:18:15.841 "superblock": true, 00:18:15.841 "num_base_bdevs": 2, 00:18:15.841 "num_base_bdevs_discovered": 2, 00:18:15.841 "num_base_bdevs_operational": 2, 00:18:15.841 "base_bdevs_list": [ 00:18:15.841 { 00:18:15.841 "name": "pt1", 00:18:15.841 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:15.841 "is_configured": true, 00:18:15.841 "data_offset": 256, 00:18:15.841 "data_size": 7936 00:18:15.841 }, 00:18:15.841 { 00:18:15.841 "name": "pt2", 00:18:15.841 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:15.841 "is_configured": true, 00:18:15.841 "data_offset": 256, 00:18:15.841 "data_size": 7936 00:18:15.841 } 00:18:15.841 ] 00:18:15.841 } 00:18:15.841 } 00:18:15.841 }' 00:18:15.841 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:15.841 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:15.841 pt2' 00:18:15.841 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:15.841 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:18:15.841 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.101 03:20:59 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.101 [2024-10-09 03:20:59.234425] 
bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' 5e4af749-073f-465e-95a9-ee36b6caa92d '!=' 5e4af749-073f-465e-95a9-ee36b6caa92d ']' 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.101 [2024-10-09 03:20:59.294148] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.101 "name": "raid_bdev1", 00:18:16.101 "uuid": "5e4af749-073f-465e-95a9-ee36b6caa92d", 00:18:16.101 "strip_size_kb": 0, 00:18:16.101 "state": "online", 00:18:16.101 "raid_level": "raid1", 00:18:16.101 "superblock": true, 00:18:16.101 "num_base_bdevs": 2, 00:18:16.101 "num_base_bdevs_discovered": 1, 00:18:16.101 "num_base_bdevs_operational": 1, 00:18:16.101 "base_bdevs_list": [ 00:18:16.101 { 00:18:16.101 "name": null, 00:18:16.101 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.101 "is_configured": false, 00:18:16.101 "data_offset": 0, 00:18:16.101 "data_size": 7936 00:18:16.101 }, 00:18:16.101 { 00:18:16.101 "name": "pt2", 00:18:16.101 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.101 "is_configured": true, 00:18:16.101 "data_offset": 256, 00:18:16.101 "data_size": 7936 00:18:16.101 } 00:18:16.101 ] 00:18:16.101 }' 00:18:16.101 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.101 03:20:59 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.672 [2024-10-09 03:20:59.745339] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:16.672 [2024-10-09 03:20:59.745402] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:16.672 [2024-10-09 03:20:59.745466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:16.672 [2024-10-09 03:20:59.745509] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:16.672 [2024-10-09 03:20:59.745540] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 
-- # (( i = 1 )) 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.672 [2024-10-09 03:20:59.817234] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:16.672 [2024-10-09 03:20:59.817278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:16.672 [2024-10-09 03:20:59.817290] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:16.672 [2024-10-09 03:20:59.817300] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:16.672 [2024-10-09 03:20:59.819520] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:16.672 [2024-10-09 03:20:59.819555] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:16.672 [2024-10-09 03:20:59.819609] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:16.672 [2024-10-09 03:20:59.819649] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:16.672 [2024-10-09 03:20:59.819721] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:16.672 [2024-10-09 03:20:59.819732] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:16.672 [2024-10-09 03:20:59.819940] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:16.672 [2024-10-09 03:20:59.820090] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:16.672 [2024-10-09 03:20:59.820099] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:16.672 [2024-10-09 03:20:59.820208] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:16.672 pt2 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.672 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.672 "name": "raid_bdev1", 00:18:16.672 "uuid": "5e4af749-073f-465e-95a9-ee36b6caa92d", 00:18:16.672 "strip_size_kb": 0, 00:18:16.672 "state": "online", 00:18:16.672 "raid_level": "raid1", 00:18:16.672 "superblock": true, 00:18:16.672 "num_base_bdevs": 2, 00:18:16.673 "num_base_bdevs_discovered": 1, 00:18:16.673 "num_base_bdevs_operational": 1, 00:18:16.673 "base_bdevs_list": [ 00:18:16.673 { 00:18:16.673 "name": null, 00:18:16.673 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.673 "is_configured": false, 00:18:16.673 "data_offset": 256, 00:18:16.673 "data_size": 7936 00:18:16.673 }, 00:18:16.673 { 00:18:16.673 "name": "pt2", 00:18:16.673 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:16.673 "is_configured": true, 00:18:16.673 "data_offset": 256, 00:18:16.673 "data_size": 7936 00:18:16.673 } 00:18:16.673 ] 00:18:16.673 }' 
00:18:16.673 03:20:59 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.673 03:20:59 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.243 [2024-10-09 03:21:00.292642] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.243 [2024-10-09 03:21:00.292709] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:17.243 [2024-10-09 03:21:00.292775] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.243 [2024-10-09 03:21:00.292823] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.243 [2024-10-09 03:21:00.292869] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.243 [2024-10-09 03:21:00.352561] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:17.243 [2024-10-09 03:21:00.352637] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:17.243 [2024-10-09 03:21:00.352665] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:17.243 [2024-10-09 03:21:00.352687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:17.243 [2024-10-09 03:21:00.354934] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:17.243 [2024-10-09 03:21:00.354999] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:17.243 [2024-10-09 03:21:00.355081] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:17.243 [2024-10-09 03:21:00.355138] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:17.243 [2024-10-09 03:21:00.355272] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:17.243 [2024-10-09 03:21:00.355328] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:17.243 [2024-10-09 03:21:00.355364] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:17.243 [2024-10-09 03:21:00.355428] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:17.243 [2024-10-09 03:21:00.355496] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:17.243 [2024-10-09 03:21:00.355504] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:17.243 [2024-10-09 03:21:00.355704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:17.243 [2024-10-09 03:21:00.355821] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:17.243 [2024-10-09 03:21:00.355832] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:17.243 [2024-10-09 03:21:00.355978] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:17.243 pt1 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.243 "name": "raid_bdev1", 00:18:17.243 "uuid": "5e4af749-073f-465e-95a9-ee36b6caa92d", 00:18:17.243 "strip_size_kb": 0, 00:18:17.243 "state": "online", 00:18:17.243 "raid_level": "raid1", 00:18:17.243 "superblock": true, 00:18:17.243 "num_base_bdevs": 2, 00:18:17.243 "num_base_bdevs_discovered": 1, 00:18:17.243 "num_base_bdevs_operational": 1, 00:18:17.243 "base_bdevs_list": [ 00:18:17.243 { 00:18:17.243 "name": null, 00:18:17.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.243 "is_configured": false, 00:18:17.243 "data_offset": 256, 00:18:17.243 "data_size": 7936 00:18:17.243 }, 00:18:17.243 { 00:18:17.243 "name": "pt2", 00:18:17.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:17.243 "is_configured": true, 00:18:17.243 "data_offset": 256, 00:18:17.243 "data_size": 7936 00:18:17.243 } 00:18:17.243 ] 00:18:17.243 }' 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.243 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.503 03:21:00 bdev_raid.raid_superblock_test_4k 
-- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:17.503 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:17.503 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.503 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.503 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:17.764 [2024-10-09 03:21:00.831955] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' 5e4af749-073f-465e-95a9-ee36b6caa92d '!=' 5e4af749-073f-465e-95a9-ee36b6caa92d ']' 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86462 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@950 -- # '[' -z 86462 ']' 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # kill -0 86462 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # uname 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux 
']' 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86462 00:18:17.764 killing process with pid 86462 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86462' 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@969 -- # kill 86462 00:18:17.764 [2024-10-09 03:21:00.916084] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.764 [2024-10-09 03:21:00.916138] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.764 [2024-10-09 03:21:00.916166] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.764 [2024-10-09 03:21:00.916180] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:17.764 03:21:00 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@974 -- # wait 86462 00:18:18.024 [2024-10-09 03:21:01.127201] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:19.404 ************************************ 00:18:19.404 END TEST raid_superblock_test_4k 00:18:19.405 ************************************ 00:18:19.405 03:21:02 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:18:19.405 00:18:19.405 real 0m6.234s 00:18:19.405 user 0m9.111s 00:18:19.405 sys 0m1.234s 00:18:19.405 03:21:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:19.405 03:21:02 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.405 03:21:02 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = 
true ']' 00:18:19.405 03:21:02 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:18:19.405 03:21:02 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:19.405 03:21:02 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:19.405 03:21:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:19.405 ************************************ 00:18:19.405 START TEST raid_rebuild_test_sb_4k 00:18:19.405 ************************************ 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86788 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86788 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@831 -- # '[' -z 86788 ']' 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.405 03:21:02 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:19.405 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:19.405 Zero copy mechanism will not be used. 00:18:19.405 [2024-10-09 03:21:02.631142] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:18:19.405 [2024-10-09 03:21:02.631267] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86788 ] 00:18:19.664 [2024-10-09 03:21:02.797998] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.924 [2024-10-09 03:21:03.038821] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.184 [2024-10-09 03:21:03.237609] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.184 [2024-10-09 03:21:03.237646] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:20.184 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:20.184 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # return 0 00:18:20.184 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:20.184 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:18:20.184 
03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.184 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.184 BaseBdev1_malloc 00:18:20.184 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.184 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:20.184 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.184 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.444 [2024-10-09 03:21:03.489777] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:20.444 [2024-10-09 03:21:03.489948] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.444 [2024-10-09 03:21:03.489982] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:20.444 [2024-10-09 03:21:03.489998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.444 [2024-10-09 03:21:03.492319] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.444 [2024-10-09 03:21:03.492357] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:20.444 BaseBdev1 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:20.444 BaseBdev2_malloc 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.444 [2024-10-09 03:21:03.556658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:20.444 [2024-10-09 03:21:03.556796] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.444 [2024-10-09 03:21:03.556822] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:20.444 [2024-10-09 03:21:03.556848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.444 [2024-10-09 03:21:03.559112] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.444 [2024-10-09 03:21:03.559151] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:20.444 BaseBdev2 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.444 spare_malloc 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b 
spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.444 spare_delay 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.444 [2024-10-09 03:21:03.624433] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:20.444 [2024-10-09 03:21:03.624488] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:20.444 [2024-10-09 03:21:03.624507] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:20.444 [2024-10-09 03:21:03.624518] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:20.444 [2024-10-09 03:21:03.626790] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:20.444 [2024-10-09 03:21:03.626829] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:20.444 spare 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.444 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.444 
[2024-10-09 03:21:03.636465] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:20.444 [2024-10-09 03:21:03.638457] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:20.444 [2024-10-09 03:21:03.638717] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:20.444 [2024-10-09 03:21:03.638737] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:20.444 [2024-10-09 03:21:03.638996] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:20.444 [2024-10-09 03:21:03.639166] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:20.444 [2024-10-09 03:21:03.639175] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:20.445 [2024-10-09 03:21:03.639318] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:20.445 "name": "raid_bdev1", 00:18:20.445 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:20.445 "strip_size_kb": 0, 00:18:20.445 "state": "online", 00:18:20.445 "raid_level": "raid1", 00:18:20.445 "superblock": true, 00:18:20.445 "num_base_bdevs": 2, 00:18:20.445 "num_base_bdevs_discovered": 2, 00:18:20.445 "num_base_bdevs_operational": 2, 00:18:20.445 "base_bdevs_list": [ 00:18:20.445 { 00:18:20.445 "name": "BaseBdev1", 00:18:20.445 "uuid": "9026256f-451a-52d0-959a-8a9e169ce436", 00:18:20.445 "is_configured": true, 00:18:20.445 "data_offset": 256, 00:18:20.445 "data_size": 7936 00:18:20.445 }, 00:18:20.445 { 00:18:20.445 "name": "BaseBdev2", 00:18:20.445 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:20.445 "is_configured": true, 00:18:20.445 "data_offset": 256, 00:18:20.445 "data_size": 7936 00:18:20.445 } 00:18:20.445 ] 00:18:20.445 }' 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:20.445 03:21:03 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set 
+x 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:21.013 [2024-10-09 03:21:04.076123] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:21.013 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:21.272 [2024-10-09 03:21:04.343425] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:21.272 /dev/nbd0 00:18:21.272 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:21.272 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:21.272 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:21.272 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:18:21.272 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:21.272 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:21.272 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:21.272 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:18:21.272 03:21:04 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:21.272 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:21.273 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:21.273 1+0 records in 00:18:21.273 1+0 records out 00:18:21.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000468607 s, 8.7 MB/s 00:18:21.273 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.273 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:18:21.273 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:21.273 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:21.273 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:18:21.273 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:21.273 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:21.273 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:21.273 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:21.273 03:21:04 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:21.841 7936+0 records in 00:18:21.841 7936+0 records out 00:18:21.841 32505856 bytes (33 MB, 31 MiB) copied, 0.603521 s, 53.9 MB/s 00:18:21.841 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:21.841 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:21.841 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:21.841 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:21.841 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:21.841 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:21.841 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:22.101 [2024-10-09 03:21:05.233965] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:22.101 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:22.101 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:22.101 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:22.101 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:22.101 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:22.101 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:22.101 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:22.101 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:22.101 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:22.101 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.101 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.102 [2024-10-09 03:21:05.266341] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.102 "name": 
"raid_bdev1", 00:18:22.102 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:22.102 "strip_size_kb": 0, 00:18:22.102 "state": "online", 00:18:22.102 "raid_level": "raid1", 00:18:22.102 "superblock": true, 00:18:22.102 "num_base_bdevs": 2, 00:18:22.102 "num_base_bdevs_discovered": 1, 00:18:22.102 "num_base_bdevs_operational": 1, 00:18:22.102 "base_bdevs_list": [ 00:18:22.102 { 00:18:22.102 "name": null, 00:18:22.102 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.102 "is_configured": false, 00:18:22.102 "data_offset": 0, 00:18:22.102 "data_size": 7936 00:18:22.102 }, 00:18:22.102 { 00:18:22.102 "name": "BaseBdev2", 00:18:22.102 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:22.102 "is_configured": true, 00:18:22.102 "data_offset": 256, 00:18:22.102 "data_size": 7936 00:18:22.102 } 00:18:22.102 ] 00:18:22.102 }' 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.102 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.670 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:22.670 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.670 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:22.670 [2024-10-09 03:21:05.701906] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:22.670 [2024-10-09 03:21:05.715761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:22.670 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.670 03:21:05 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:22.670 [2024-10-09 03:21:05.717792] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:23.610 03:21:06 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:23.610 "name": "raid_bdev1", 00:18:23.610 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:23.610 "strip_size_kb": 0, 00:18:23.610 "state": "online", 00:18:23.610 "raid_level": "raid1", 00:18:23.610 "superblock": true, 00:18:23.610 "num_base_bdevs": 2, 00:18:23.610 "num_base_bdevs_discovered": 2, 00:18:23.610 "num_base_bdevs_operational": 2, 00:18:23.610 "process": { 00:18:23.610 "type": "rebuild", 00:18:23.610 "target": "spare", 00:18:23.610 "progress": { 00:18:23.610 "blocks": 2560, 00:18:23.610 "percent": 32 00:18:23.610 } 00:18:23.610 }, 00:18:23.610 "base_bdevs_list": [ 00:18:23.610 { 00:18:23.610 "name": "spare", 00:18:23.610 "uuid": "1926dd4b-f20f-5a34-b395-8e40593b7f02", 00:18:23.610 "is_configured": true, 00:18:23.610 "data_offset": 256, 
00:18:23.610 "data_size": 7936 00:18:23.610 }, 00:18:23.610 { 00:18:23.610 "name": "BaseBdev2", 00:18:23.610 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:23.610 "is_configured": true, 00:18:23.610 "data_offset": 256, 00:18:23.610 "data_size": 7936 00:18:23.610 } 00:18:23.610 ] 00:18:23.610 }' 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:23.610 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:23.611 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.611 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.611 [2024-10-09 03:21:06.877723] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:23.870 [2024-10-09 03:21:06.926477] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:23.870 [2024-10-09 03:21:06.926534] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:23.870 [2024-10-09 03:21:06.926549] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:23.870 [2024-10-09 03:21:06.926559] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:23.870 
03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:23.870 03:21:06 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.870 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.870 "name": "raid_bdev1", 00:18:23.870 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:23.870 "strip_size_kb": 0, 00:18:23.870 "state": "online", 00:18:23.870 "raid_level": "raid1", 00:18:23.870 "superblock": true, 00:18:23.870 "num_base_bdevs": 2, 00:18:23.870 "num_base_bdevs_discovered": 1, 00:18:23.870 
"num_base_bdevs_operational": 1, 00:18:23.870 "base_bdevs_list": [ 00:18:23.870 { 00:18:23.870 "name": null, 00:18:23.870 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.870 "is_configured": false, 00:18:23.871 "data_offset": 0, 00:18:23.871 "data_size": 7936 00:18:23.871 }, 00:18:23.871 { 00:18:23.871 "name": "BaseBdev2", 00:18:23.871 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:23.871 "is_configured": true, 00:18:23.871 "data_offset": 256, 00:18:23.871 "data_size": 7936 00:18:23.871 } 00:18:23.871 ] 00:18:23.871 }' 00:18:23.871 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.871 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.130 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:24.130 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:24.130 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:24.130 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:24.130 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:24.130 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.130 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.130 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.130 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:24.399 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.399 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:24.399 
"name": "raid_bdev1", 00:18:24.399 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:24.399 "strip_size_kb": 0, 00:18:24.399 "state": "online", 00:18:24.399 "raid_level": "raid1", 00:18:24.399 "superblock": true, 00:18:24.399 "num_base_bdevs": 2, 00:18:24.399 "num_base_bdevs_discovered": 1, 00:18:24.399 "num_base_bdevs_operational": 1, 00:18:24.399 "base_bdevs_list": [ 00:18:24.399 { 00:18:24.399 "name": null, 00:18:24.399 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.399 "is_configured": false, 00:18:24.399 "data_offset": 0, 00:18:24.399 "data_size": 7936 00:18:24.399 }, 00:18:24.399 { 00:18:24.399 "name": "BaseBdev2", 00:18:24.399 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:24.399 "is_configured": true, 00:18:24.399 "data_offset": 256, 00:18:24.399 "data_size": 7936 00:18:24.399 } 00:18:24.399 ] 00:18:24.399 }' 00:18:24.399 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:24.399 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:24.399 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:24.399 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:24.399 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:24.399 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.399 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:24.399 [2024-10-09 03:21:07.573289] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:24.399 [2024-10-09 03:21:07.588155] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:24.399 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:18:24.399 03:21:07 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:24.399 [2024-10-09 03:21:07.590214] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:25.373 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:25.373 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.373 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.373 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.373 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.373 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.373 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.373 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.373 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.373 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.373 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.373 "name": "raid_bdev1", 00:18:25.373 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:25.373 "strip_size_kb": 0, 00:18:25.373 "state": "online", 00:18:25.373 "raid_level": "raid1", 00:18:25.373 "superblock": true, 00:18:25.373 "num_base_bdevs": 2, 00:18:25.373 "num_base_bdevs_discovered": 2, 00:18:25.373 "num_base_bdevs_operational": 2, 00:18:25.373 "process": { 00:18:25.373 "type": "rebuild", 00:18:25.373 "target": "spare", 00:18:25.373 "progress": { 00:18:25.373 "blocks": 2560, 00:18:25.373 
"percent": 32 00:18:25.373 } 00:18:25.373 }, 00:18:25.373 "base_bdevs_list": [ 00:18:25.373 { 00:18:25.373 "name": "spare", 00:18:25.373 "uuid": "1926dd4b-f20f-5a34-b395-8e40593b7f02", 00:18:25.373 "is_configured": true, 00:18:25.373 "data_offset": 256, 00:18:25.373 "data_size": 7936 00:18:25.373 }, 00:18:25.373 { 00:18:25.373 "name": "BaseBdev2", 00:18:25.373 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:25.373 "is_configured": true, 00:18:25.373 "data_offset": 256, 00:18:25.373 "data_size": 7936 00:18:25.373 } 00:18:25.373 ] 00:18:25.373 }' 00:18:25.373 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:25.633 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=695 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:25.633 "name": "raid_bdev1", 00:18:25.633 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:25.633 "strip_size_kb": 0, 00:18:25.633 "state": "online", 00:18:25.633 "raid_level": "raid1", 00:18:25.633 "superblock": true, 00:18:25.633 "num_base_bdevs": 2, 00:18:25.633 "num_base_bdevs_discovered": 2, 00:18:25.633 "num_base_bdevs_operational": 2, 00:18:25.633 "process": { 00:18:25.633 "type": "rebuild", 00:18:25.633 "target": "spare", 00:18:25.633 "progress": { 00:18:25.633 "blocks": 2816, 00:18:25.633 "percent": 35 00:18:25.633 } 00:18:25.633 }, 00:18:25.633 "base_bdevs_list": [ 00:18:25.633 { 00:18:25.633 "name": "spare", 00:18:25.633 "uuid": "1926dd4b-f20f-5a34-b395-8e40593b7f02", 00:18:25.633 "is_configured": true, 00:18:25.633 "data_offset": 256, 00:18:25.633 "data_size": 7936 00:18:25.633 }, 00:18:25.633 { 00:18:25.633 "name": "BaseBdev2", 
00:18:25.633 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:25.633 "is_configured": true, 00:18:25.633 "data_offset": 256, 00:18:25.633 "data_size": 7936 00:18:25.633 } 00:18:25.633 ] 00:18:25.633 }' 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:25.633 03:21:08 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:27.016 03:21:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:27.016 03:21:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.016 03:21:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.016 03:21:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.016 03:21:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.016 03:21:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.016 03:21:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.016 03:21:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.016 03:21:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.016 03:21:09 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.016 03:21:09 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.016 03:21:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.016 "name": "raid_bdev1", 00:18:27.016 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:27.016 "strip_size_kb": 0, 00:18:27.016 "state": "online", 00:18:27.016 "raid_level": "raid1", 00:18:27.016 "superblock": true, 00:18:27.016 "num_base_bdevs": 2, 00:18:27.016 "num_base_bdevs_discovered": 2, 00:18:27.016 "num_base_bdevs_operational": 2, 00:18:27.016 "process": { 00:18:27.016 "type": "rebuild", 00:18:27.016 "target": "spare", 00:18:27.016 "progress": { 00:18:27.016 "blocks": 5888, 00:18:27.016 "percent": 74 00:18:27.016 } 00:18:27.016 }, 00:18:27.016 "base_bdevs_list": [ 00:18:27.016 { 00:18:27.016 "name": "spare", 00:18:27.016 "uuid": "1926dd4b-f20f-5a34-b395-8e40593b7f02", 00:18:27.016 "is_configured": true, 00:18:27.016 "data_offset": 256, 00:18:27.016 "data_size": 7936 00:18:27.016 }, 00:18:27.016 { 00:18:27.016 "name": "BaseBdev2", 00:18:27.016 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:27.016 "is_configured": true, 00:18:27.016 "data_offset": 256, 00:18:27.016 "data_size": 7936 00:18:27.016 } 00:18:27.016 ] 00:18:27.016 }' 00:18:27.016 03:21:09 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:27.016 03:21:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:27.016 03:21:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:27.016 03:21:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:27.016 03:21:10 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:27.589 [2024-10-09 03:21:10.710970] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:27.589 [2024-10-09 03:21:10.711124] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:27.589 [2024-10-09 03:21:10.711263] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:27.848 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:27.848 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:27.848 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:27.848 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:27.848 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:27.848 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:27.848 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.848 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:27.848 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.848 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:27.848 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.848 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:27.848 "name": "raid_bdev1", 00:18:27.848 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:27.848 "strip_size_kb": 0, 00:18:27.848 "state": "online", 00:18:27.848 "raid_level": "raid1", 00:18:27.848 "superblock": true, 00:18:27.848 "num_base_bdevs": 2, 00:18:27.848 "num_base_bdevs_discovered": 2, 00:18:27.848 "num_base_bdevs_operational": 2, 00:18:27.848 "base_bdevs_list": [ 00:18:27.848 { 00:18:27.848 "name": 
"spare", 00:18:27.848 "uuid": "1926dd4b-f20f-5a34-b395-8e40593b7f02", 00:18:27.848 "is_configured": true, 00:18:27.848 "data_offset": 256, 00:18:27.848 "data_size": 7936 00:18:27.848 }, 00:18:27.848 { 00:18:27.848 "name": "BaseBdev2", 00:18:27.848 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:27.848 "is_configured": true, 00:18:27.848 "data_offset": 256, 00:18:27.848 "data_size": 7936 00:18:27.848 } 00:18:27.848 ] 00:18:27.848 }' 00:18:27.848 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@10 -- # set +x 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.107 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:28.107 "name": "raid_bdev1", 00:18:28.107 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:28.107 "strip_size_kb": 0, 00:18:28.107 "state": "online", 00:18:28.108 "raid_level": "raid1", 00:18:28.108 "superblock": true, 00:18:28.108 "num_base_bdevs": 2, 00:18:28.108 "num_base_bdevs_discovered": 2, 00:18:28.108 "num_base_bdevs_operational": 2, 00:18:28.108 "base_bdevs_list": [ 00:18:28.108 { 00:18:28.108 "name": "spare", 00:18:28.108 "uuid": "1926dd4b-f20f-5a34-b395-8e40593b7f02", 00:18:28.108 "is_configured": true, 00:18:28.108 "data_offset": 256, 00:18:28.108 "data_size": 7936 00:18:28.108 }, 00:18:28.108 { 00:18:28.108 "name": "BaseBdev2", 00:18:28.108 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:28.108 "is_configured": true, 00:18:28.108 "data_offset": 256, 00:18:28.108 "data_size": 7936 00:18:28.108 } 00:18:28.108 ] 00:18:28.108 }' 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.108 "name": "raid_bdev1", 00:18:28.108 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:28.108 "strip_size_kb": 0, 00:18:28.108 "state": "online", 00:18:28.108 "raid_level": "raid1", 00:18:28.108 "superblock": true, 00:18:28.108 "num_base_bdevs": 2, 00:18:28.108 "num_base_bdevs_discovered": 2, 00:18:28.108 "num_base_bdevs_operational": 2, 00:18:28.108 "base_bdevs_list": [ 00:18:28.108 { 00:18:28.108 "name": "spare", 00:18:28.108 "uuid": "1926dd4b-f20f-5a34-b395-8e40593b7f02", 00:18:28.108 "is_configured": true, 00:18:28.108 "data_offset": 256, 00:18:28.108 "data_size": 7936 00:18:28.108 }, 00:18:28.108 
{ 00:18:28.108 "name": "BaseBdev2", 00:18:28.108 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:28.108 "is_configured": true, 00:18:28.108 "data_offset": 256, 00:18:28.108 "data_size": 7936 00:18:28.108 } 00:18:28.108 ] 00:18:28.108 }' 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.108 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.676 [2024-10-09 03:21:11.812951] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:28.676 [2024-10-09 03:21:11.813026] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:28.676 [2024-10-09 03:21:11.813120] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:28.676 [2024-10-09 03:21:11.813197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:28.676 [2024-10-09 03:21:11.813238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:28.676 
03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:28.676 03:21:11 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:18:28.935 /dev/nbd0 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:28.935 03:21:12 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:28.935 1+0 records in 00:18:28.935 1+0 records out 00:18:28.935 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000250762 s, 16.3 MB/s 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:28.935 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:28.936 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:18:29.195 /dev/nbd1 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@869 -- # local i 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # break 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:29.195 1+0 records in 00:18:29.195 1+0 records out 00:18:29.195 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000445925 s, 9.2 MB/s 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@886 -- # size=4096 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # return 0 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:18:29.195 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:18:29.453 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:18:29.453 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:29.453 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:18:29.453 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:29.453 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:18:29.453 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:29.453 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:29.453 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:29.712 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:29.712 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:29.712 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:29.712 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:29.712 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:29.712 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/nbd_common.sh@41 -- # break 00:18:29.712 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:29.712 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.713 [2024-10-09 03:21:12.992736] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:29.713 [2024-10-09 03:21:12.992862] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:29.713 [2024-10-09 03:21:12.992892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:29.713 [2024-10-09 03:21:12.992901] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:29.713 [2024-10-09 03:21:12.995214] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:29.713 [2024-10-09 03:21:12.995285] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:29.713 [2024-10-09 03:21:12.995397] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:29.713 [2024-10-09 03:21:12.995490] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:29.713 [2024-10-09 03:21:12.995675] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:29.713 spare 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.713 03:21:12 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.972 [2024-10-09 03:21:13.095613] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:18:29.972 [2024-10-09 03:21:13.095641] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:29.972 [2024-10-09 03:21:13.095921] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:18:29.972 [2024-10-09 03:21:13.096113] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:18:29.972 [2024-10-09 03:21:13.096132] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:18:29.972 [2024-10-09 03:21:13.096294] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.972 03:21:13 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.972 "name": "raid_bdev1", 00:18:29.972 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:29.972 "strip_size_kb": 0, 00:18:29.972 "state": "online", 00:18:29.972 "raid_level": "raid1", 00:18:29.972 "superblock": true, 00:18:29.972 "num_base_bdevs": 2, 00:18:29.972 "num_base_bdevs_discovered": 2, 00:18:29.972 "num_base_bdevs_operational": 2, 00:18:29.972 "base_bdevs_list": [ 00:18:29.972 { 00:18:29.972 "name": "spare", 00:18:29.972 "uuid": "1926dd4b-f20f-5a34-b395-8e40593b7f02", 00:18:29.972 "is_configured": true, 00:18:29.972 "data_offset": 256, 00:18:29.972 "data_size": 7936 00:18:29.972 }, 00:18:29.972 { 00:18:29.972 "name": "BaseBdev2", 00:18:29.972 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:29.972 "is_configured": true, 00:18:29.972 "data_offset": 256, 00:18:29.972 "data_size": 7936 00:18:29.972 } 00:18:29.972 ] 00:18:29.972 }' 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.972 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.232 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:30.232 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:30.232 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:30.232 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:30.232 03:21:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:30.232 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.232 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.232 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.232 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.232 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.233 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:30.233 "name": "raid_bdev1", 00:18:30.233 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:30.233 "strip_size_kb": 0, 00:18:30.233 "state": "online", 00:18:30.233 "raid_level": "raid1", 00:18:30.233 "superblock": true, 00:18:30.233 "num_base_bdevs": 2, 00:18:30.233 "num_base_bdevs_discovered": 2, 00:18:30.233 "num_base_bdevs_operational": 2, 00:18:30.233 "base_bdevs_list": [ 00:18:30.233 { 00:18:30.233 "name": "spare", 00:18:30.233 "uuid": "1926dd4b-f20f-5a34-b395-8e40593b7f02", 00:18:30.233 "is_configured": true, 00:18:30.233 "data_offset": 256, 00:18:30.233 "data_size": 7936 00:18:30.233 }, 00:18:30.233 { 00:18:30.233 "name": "BaseBdev2", 00:18:30.233 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:30.233 "is_configured": true, 00:18:30.233 "data_offset": 256, 00:18:30.233 "data_size": 7936 00:18:30.233 } 00:18:30.233 ] 00:18:30.233 }' 00:18:30.233 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:30.493 03:21:13 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.493 [2024-10-09 03:21:13.631689] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.493 "name": "raid_bdev1", 00:18:30.493 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:30.493 "strip_size_kb": 0, 00:18:30.493 "state": "online", 00:18:30.493 "raid_level": "raid1", 00:18:30.493 "superblock": true, 00:18:30.493 "num_base_bdevs": 2, 00:18:30.493 "num_base_bdevs_discovered": 1, 00:18:30.493 "num_base_bdevs_operational": 1, 00:18:30.493 "base_bdevs_list": [ 00:18:30.493 { 00:18:30.493 "name": null, 00:18:30.493 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:30.493 "is_configured": false, 00:18:30.493 "data_offset": 0, 00:18:30.493 "data_size": 7936 00:18:30.493 }, 00:18:30.493 { 00:18:30.493 "name": "BaseBdev2", 00:18:30.493 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:30.493 "is_configured": true, 00:18:30.493 "data_offset": 256, 00:18:30.493 "data_size": 7936 00:18:30.493 } 00:18:30.493 ] 00:18:30.493 }' 
00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.493 03:21:13 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.063 03:21:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:31.063 03:21:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.063 03:21:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:31.063 [2024-10-09 03:21:14.090929] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:31.063 [2024-10-09 03:21:14.091101] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:31.063 [2024-10-09 03:21:14.091161] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:31.063 [2024-10-09 03:21:14.091209] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:31.063 [2024-10-09 03:21:14.106342] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:18:31.063 03:21:14 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.063 03:21:14 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:31.063 [2024-10-09 03:21:14.108359] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:32.003 "name": "raid_bdev1", 00:18:32.003 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:32.003 "strip_size_kb": 0, 00:18:32.003 "state": "online", 00:18:32.003 "raid_level": "raid1", 00:18:32.003 "superblock": true, 00:18:32.003 "num_base_bdevs": 2, 00:18:32.003 "num_base_bdevs_discovered": 2, 00:18:32.003 "num_base_bdevs_operational": 2, 00:18:32.003 "process": { 00:18:32.003 "type": "rebuild", 00:18:32.003 "target": "spare", 00:18:32.003 "progress": { 00:18:32.003 "blocks": 2560, 00:18:32.003 "percent": 32 00:18:32.003 } 00:18:32.003 }, 00:18:32.003 "base_bdevs_list": [ 00:18:32.003 { 00:18:32.003 "name": "spare", 00:18:32.003 "uuid": "1926dd4b-f20f-5a34-b395-8e40593b7f02", 00:18:32.003 "is_configured": true, 00:18:32.003 "data_offset": 256, 00:18:32.003 "data_size": 7936 00:18:32.003 }, 00:18:32.003 { 00:18:32.003 "name": "BaseBdev2", 00:18:32.003 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:32.003 "is_configured": true, 00:18:32.003 "data_offset": 256, 00:18:32.003 "data_size": 7936 00:18:32.003 } 00:18:32.003 ] 00:18:32.003 }' 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type 
// "none"' 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.003 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.003 [2024-10-09 03:21:15.264183] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.262 [2024-10-09 03:21:15.316880] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:32.262 [2024-10-09 03:21:15.316940] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.262 [2024-10-09 03:21:15.316954] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:32.262 [2024-10-09 03:21:15.316964] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.262 "name": "raid_bdev1", 00:18:32.262 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:32.262 "strip_size_kb": 0, 00:18:32.262 "state": "online", 00:18:32.262 "raid_level": "raid1", 00:18:32.262 "superblock": true, 00:18:32.262 "num_base_bdevs": 2, 00:18:32.262 "num_base_bdevs_discovered": 1, 00:18:32.262 "num_base_bdevs_operational": 1, 00:18:32.262 "base_bdevs_list": [ 00:18:32.262 { 00:18:32.262 "name": null, 00:18:32.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:32.262 "is_configured": false, 00:18:32.262 "data_offset": 0, 00:18:32.262 "data_size": 7936 00:18:32.262 }, 00:18:32.262 { 00:18:32.262 "name": "BaseBdev2", 00:18:32.262 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:32.262 "is_configured": true, 00:18:32.262 
"data_offset": 256, 00:18:32.262 "data_size": 7936 00:18:32.262 } 00:18:32.262 ] 00:18:32.262 }' 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.262 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.522 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:32.522 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.522 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:32.522 [2024-10-09 03:21:15.749487] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:32.522 [2024-10-09 03:21:15.749602] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:32.522 [2024-10-09 03:21:15.749641] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:32.522 [2024-10-09 03:21:15.749671] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:32.522 [2024-10-09 03:21:15.750237] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:32.522 [2024-10-09 03:21:15.750302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:32.522 [2024-10-09 03:21:15.750411] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:32.522 [2024-10-09 03:21:15.750453] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:32.522 [2024-10-09 03:21:15.750490] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:32.522 [2024-10-09 03:21:15.750543] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:32.522 [2024-10-09 03:21:15.764771] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:18:32.522 spare 00:18:32.522 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.522 03:21:15 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:32.522 [2024-10-09 03:21:15.766864] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:33.902 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:33.902 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:33.902 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:33.902 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:33.902 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:33.902 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.902 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.902 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.902 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.902 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.902 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:33.902 "name": "raid_bdev1", 00:18:33.902 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:33.902 "strip_size_kb": 0, 00:18:33.902 
"state": "online", 00:18:33.902 "raid_level": "raid1", 00:18:33.902 "superblock": true, 00:18:33.903 "num_base_bdevs": 2, 00:18:33.903 "num_base_bdevs_discovered": 2, 00:18:33.903 "num_base_bdevs_operational": 2, 00:18:33.903 "process": { 00:18:33.903 "type": "rebuild", 00:18:33.903 "target": "spare", 00:18:33.903 "progress": { 00:18:33.903 "blocks": 2560, 00:18:33.903 "percent": 32 00:18:33.903 } 00:18:33.903 }, 00:18:33.903 "base_bdevs_list": [ 00:18:33.903 { 00:18:33.903 "name": "spare", 00:18:33.903 "uuid": "1926dd4b-f20f-5a34-b395-8e40593b7f02", 00:18:33.903 "is_configured": true, 00:18:33.903 "data_offset": 256, 00:18:33.903 "data_size": 7936 00:18:33.903 }, 00:18:33.903 { 00:18:33.903 "name": "BaseBdev2", 00:18:33.903 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:33.903 "is_configured": true, 00:18:33.903 "data_offset": 256, 00:18:33.903 "data_size": 7936 00:18:33.903 } 00:18:33.903 ] 00:18:33.903 }' 00:18:33.903 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:33.903 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:33.903 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:33.903 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:33.903 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:33.903 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.903 03:21:16 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.903 [2024-10-09 03:21:16.926490] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.903 [2024-10-09 03:21:16.975188] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 
00:18:33.903 [2024-10-09 03:21:16.975246] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:33.903 [2024-10-09 03:21:16.975264] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:33.903 [2024-10-09 03:21:16.975272] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.903 03:21:17 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:33.903 "name": "raid_bdev1", 00:18:33.903 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:33.903 "strip_size_kb": 0, 00:18:33.903 "state": "online", 00:18:33.903 "raid_level": "raid1", 00:18:33.903 "superblock": true, 00:18:33.903 "num_base_bdevs": 2, 00:18:33.903 "num_base_bdevs_discovered": 1, 00:18:33.903 "num_base_bdevs_operational": 1, 00:18:33.903 "base_bdevs_list": [ 00:18:33.903 { 00:18:33.903 "name": null, 00:18:33.903 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:33.903 "is_configured": false, 00:18:33.903 "data_offset": 0, 00:18:33.903 "data_size": 7936 00:18:33.903 }, 00:18:33.903 { 00:18:33.903 "name": "BaseBdev2", 00:18:33.903 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:33.903 "is_configured": true, 00:18:33.903 "data_offset": 256, 00:18:33.903 "data_size": 7936 00:18:33.903 } 00:18:33.903 ] 00:18:33.903 }' 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:33.903 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.470 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:34.470 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:34.471 "name": "raid_bdev1", 00:18:34.471 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:34.471 "strip_size_kb": 0, 00:18:34.471 "state": "online", 00:18:34.471 "raid_level": "raid1", 00:18:34.471 "superblock": true, 00:18:34.471 "num_base_bdevs": 2, 00:18:34.471 "num_base_bdevs_discovered": 1, 00:18:34.471 "num_base_bdevs_operational": 1, 00:18:34.471 "base_bdevs_list": [ 00:18:34.471 { 00:18:34.471 "name": null, 00:18:34.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:34.471 "is_configured": false, 00:18:34.471 "data_offset": 0, 00:18:34.471 "data_size": 7936 00:18:34.471 }, 00:18:34.471 { 00:18:34.471 "name": "BaseBdev2", 00:18:34.471 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:34.471 "is_configured": true, 00:18:34.471 "data_offset": 256, 00:18:34.471 "data_size": 7936 00:18:34.471 } 00:18:34.471 ] 00:18:34.471 }' 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:34.471 [2024-10-09 03:21:17.615570] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:34.471 [2024-10-09 03:21:17.615629] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:34.471 [2024-10-09 03:21:17.615655] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:34.471 [2024-10-09 03:21:17.615664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:34.471 [2024-10-09 03:21:17.616174] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:34.471 [2024-10-09 03:21:17.616194] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:34.471 [2024-10-09 03:21:17.616269] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:34.471 [2024-10-09 03:21:17.616284] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:34.471 [2024-10-09 03:21:17.616301] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:34.471 [2024-10-09 03:21:17.616312] 
bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:34.471 BaseBdev1 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.471 03:21:17 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:35.410 "name": "raid_bdev1", 00:18:35.410 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:35.410 "strip_size_kb": 0, 00:18:35.410 "state": "online", 00:18:35.410 "raid_level": "raid1", 00:18:35.410 "superblock": true, 00:18:35.410 "num_base_bdevs": 2, 00:18:35.410 "num_base_bdevs_discovered": 1, 00:18:35.410 "num_base_bdevs_operational": 1, 00:18:35.410 "base_bdevs_list": [ 00:18:35.410 { 00:18:35.410 "name": null, 00:18:35.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.410 "is_configured": false, 00:18:35.410 "data_offset": 0, 00:18:35.410 "data_size": 7936 00:18:35.410 }, 00:18:35.410 { 00:18:35.410 "name": "BaseBdev2", 00:18:35.410 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:35.410 "is_configured": true, 00:18:35.410 "data_offset": 256, 00:18:35.410 "data_size": 7936 00:18:35.410 } 00:18:35.410 ] 00:18:35.410 }' 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:35.410 03:21:18 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:35.980 "name": "raid_bdev1", 00:18:35.980 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:35.980 "strip_size_kb": 0, 00:18:35.980 "state": "online", 00:18:35.980 "raid_level": "raid1", 00:18:35.980 "superblock": true, 00:18:35.980 "num_base_bdevs": 2, 00:18:35.980 "num_base_bdevs_discovered": 1, 00:18:35.980 "num_base_bdevs_operational": 1, 00:18:35.980 "base_bdevs_list": [ 00:18:35.980 { 00:18:35.980 "name": null, 00:18:35.980 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:35.980 "is_configured": false, 00:18:35.980 "data_offset": 0, 00:18:35.980 "data_size": 7936 00:18:35.980 }, 00:18:35.980 { 00:18:35.980 "name": "BaseBdev2", 00:18:35.980 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:35.980 "is_configured": true, 00:18:35.980 "data_offset": 256, 00:18:35.980 "data_size": 7936 00:18:35.980 } 00:18:35.980 ] 00:18:35.980 }' 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- 
common/autotest_common.sh@650 -- # local es=0 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:35.980 [2024-10-09 03:21:19.217102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:35.980 [2024-10-09 03:21:19.217303] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:35.980 [2024-10-09 03:21:19.217326] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:35.980 request: 00:18:35.980 { 00:18:35.980 "base_bdev": "BaseBdev1", 00:18:35.980 "raid_bdev": "raid_bdev1", 00:18:35.980 "method": "bdev_raid_add_base_bdev", 00:18:35.980 "req_id": 1 00:18:35.980 } 00:18:35.980 Got JSON-RPC error response 00:18:35.980 response: 00:18:35.980 { 00:18:35.980 "code": -22, 00:18:35.980 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:35.980 } 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 
00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@653 -- # es=1 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:35.980 03:21:19 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.362 "name": "raid_bdev1", 00:18:37.362 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:37.362 "strip_size_kb": 0, 00:18:37.362 "state": "online", 00:18:37.362 "raid_level": "raid1", 00:18:37.362 "superblock": true, 00:18:37.362 "num_base_bdevs": 2, 00:18:37.362 "num_base_bdevs_discovered": 1, 00:18:37.362 "num_base_bdevs_operational": 1, 00:18:37.362 "base_bdevs_list": [ 00:18:37.362 { 00:18:37.362 "name": null, 00:18:37.362 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.362 "is_configured": false, 00:18:37.362 "data_offset": 0, 00:18:37.362 "data_size": 7936 00:18:37.362 }, 00:18:37.362 { 00:18:37.362 "name": "BaseBdev2", 00:18:37.362 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:37.362 "is_configured": true, 00:18:37.362 "data_offset": 256, 00:18:37.362 "data_size": 7936 00:18:37.362 } 00:18:37.362 ] 00:18:37.362 }' 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.362 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:37.623 03:21:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:37.623 "name": "raid_bdev1", 00:18:37.623 "uuid": "ada6e14b-62ea-469a-a6da-8622aff995f9", 00:18:37.623 "strip_size_kb": 0, 00:18:37.623 "state": "online", 00:18:37.623 "raid_level": "raid1", 00:18:37.623 "superblock": true, 00:18:37.623 "num_base_bdevs": 2, 00:18:37.623 "num_base_bdevs_discovered": 1, 00:18:37.623 "num_base_bdevs_operational": 1, 00:18:37.623 "base_bdevs_list": [ 00:18:37.623 { 00:18:37.623 "name": null, 00:18:37.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:37.623 "is_configured": false, 00:18:37.623 "data_offset": 0, 00:18:37.623 "data_size": 7936 00:18:37.623 }, 00:18:37.623 { 00:18:37.623 "name": "BaseBdev2", 00:18:37.623 "uuid": "6375c41b-603f-59ee-a540-0ca38e599b57", 00:18:37.623 "is_configured": true, 00:18:37.623 "data_offset": 256, 00:18:37.623 "data_size": 7936 00:18:37.623 } 00:18:37.623 ] 00:18:37.623 }' 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:37.623 03:21:20 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86788 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@950 -- # '[' -z 86788 ']' 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # kill -0 86788 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # uname 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 86788 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@968 -- # echo 'killing process with pid 86788' 00:18:37.623 killing process with pid 86788 00:18:37.623 Received shutdown signal, test time was about 60.000000 seconds 00:18:37.623 00:18:37.623 Latency(us) 00:18:37.623 [2024-10-09T03:21:20.926Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.623 [2024-10-09T03:21:20.926Z] =================================================================================================================== 00:18:37.623 [2024-10-09T03:21:20.926Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@969 -- # kill 86788 00:18:37.623 [2024-10-09 03:21:20.845893] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:37.623 [2024-10-09 03:21:20.846013] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.623 [2024-10-09 03:21:20.846057] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going 
to free all in destruct 00:18:37.623 [2024-10-09 03:21:20.846070] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:37.623 03:21:20 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@974 -- # wait 86788 00:18:37.884 [2024-10-09 03:21:21.151147] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:39.267 03:21:22 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:18:39.267 00:18:39.267 real 0m19.918s 00:18:39.267 user 0m25.762s 00:18:39.267 sys 0m2.770s 00:18:39.267 ************************************ 00:18:39.267 END TEST raid_rebuild_test_sb_4k 00:18:39.267 ************************************ 00:18:39.267 03:21:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:39.267 03:21:22 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:18:39.267 03:21:22 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:18:39.267 03:21:22 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:18:39.267 03:21:22 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:18:39.267 03:21:22 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:39.267 03:21:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:39.267 ************************************ 00:18:39.267 START TEST raid_state_function_test_sb_md_separate 00:18:39.267 ************************************ 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:18:39.267 03:21:22 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:39.267 03:21:22 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:39.267 Process raid pid: 87481 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87481 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87481' 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87481 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87481 ']' 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:39.267 03:21:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:39.528 [2024-10-09 03:21:22.616669] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:18:39.528 [2024-10-09 03:21:22.616910] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.528 [2024-10-09 03:21:22.781539] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.787 [2024-10-09 03:21:23.027364] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.047 [2024-10-09 03:21:23.273476] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.047 [2024-10-09 03:21:23.273587] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.310 [2024-10-09 03:21:23.458528] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:40.310 [2024-10-09 03:21:23.458583] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev1 doesn't exist now 00:18:40.310 [2024-10-09 03:21:23.458596] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:40.310 [2024-10-09 03:21:23.458607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.310 "name": "Existed_Raid", 00:18:40.310 "uuid": "b4600bb9-be6f-425b-8164-fc61452734f0", 00:18:40.310 "strip_size_kb": 0, 00:18:40.310 "state": "configuring", 00:18:40.310 "raid_level": "raid1", 00:18:40.310 "superblock": true, 00:18:40.310 "num_base_bdevs": 2, 00:18:40.310 "num_base_bdevs_discovered": 0, 00:18:40.310 "num_base_bdevs_operational": 2, 00:18:40.310 "base_bdevs_list": [ 00:18:40.310 { 00:18:40.310 "name": "BaseBdev1", 00:18:40.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.310 "is_configured": false, 00:18:40.310 "data_offset": 0, 00:18:40.310 "data_size": 0 00:18:40.310 }, 00:18:40.310 { 00:18:40.310 "name": "BaseBdev2", 00:18:40.310 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.310 "is_configured": false, 00:18:40.310 "data_offset": 0, 00:18:40.310 "data_size": 0 00:18:40.310 } 00:18:40.310 ] 00:18:40.310 }' 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.310 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.908 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:40.908 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.908 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.908 
[2024-10-09 03:21:23.917633] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:40.908 [2024-10-09 03:21:23.917725] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:40.908 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.908 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:40.908 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.908 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.908 [2024-10-09 03:21:23.929639] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:40.908 [2024-10-09 03:21:23.929713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:40.908 [2024-10-09 03:21:23.929738] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:40.908 [2024-10-09 03:21:23.929761] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:40.908 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.908 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:18:40.908 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.908 03:21:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.908 [2024-10-09 03:21:24.014820] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:40.908 
BaseBdev1 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.908 [ 00:18:40.908 { 00:18:40.908 "name": "BaseBdev1", 00:18:40.908 "aliases": [ 00:18:40.908 "3187f5dd-e086-4ea8-93c4-40c5245839c7" 00:18:40.908 ], 00:18:40.908 "product_name": "Malloc disk", 
00:18:40.908 "block_size": 4096, 00:18:40.908 "num_blocks": 8192, 00:18:40.908 "uuid": "3187f5dd-e086-4ea8-93c4-40c5245839c7", 00:18:40.908 "md_size": 32, 00:18:40.908 "md_interleave": false, 00:18:40.908 "dif_type": 0, 00:18:40.908 "assigned_rate_limits": { 00:18:40.908 "rw_ios_per_sec": 0, 00:18:40.908 "rw_mbytes_per_sec": 0, 00:18:40.908 "r_mbytes_per_sec": 0, 00:18:40.908 "w_mbytes_per_sec": 0 00:18:40.908 }, 00:18:40.908 "claimed": true, 00:18:40.908 "claim_type": "exclusive_write", 00:18:40.908 "zoned": false, 00:18:40.908 "supported_io_types": { 00:18:40.908 "read": true, 00:18:40.908 "write": true, 00:18:40.908 "unmap": true, 00:18:40.908 "flush": true, 00:18:40.908 "reset": true, 00:18:40.908 "nvme_admin": false, 00:18:40.908 "nvme_io": false, 00:18:40.908 "nvme_io_md": false, 00:18:40.908 "write_zeroes": true, 00:18:40.908 "zcopy": true, 00:18:40.908 "get_zone_info": false, 00:18:40.908 "zone_management": false, 00:18:40.908 "zone_append": false, 00:18:40.908 "compare": false, 00:18:40.908 "compare_and_write": false, 00:18:40.908 "abort": true, 00:18:40.908 "seek_hole": false, 00:18:40.908 "seek_data": false, 00:18:40.908 "copy": true, 00:18:40.908 "nvme_iov_md": false 00:18:40.908 }, 00:18:40.908 "memory_domains": [ 00:18:40.908 { 00:18:40.908 "dma_device_id": "system", 00:18:40.908 "dma_device_type": 1 00:18:40.908 }, 00:18:40.908 { 00:18:40.908 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:40.908 "dma_device_type": 2 00:18:40.908 } 00:18:40.908 ], 00:18:40.908 "driver_specific": {} 00:18:40.908 } 00:18:40.908 ] 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:40.908 03:21:24 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.908 "name": "Existed_Raid", 00:18:40.908 "uuid": "a04a49ef-3a83-45af-9fe3-285242f0a61f", 
00:18:40.908 "strip_size_kb": 0, 00:18:40.908 "state": "configuring", 00:18:40.908 "raid_level": "raid1", 00:18:40.908 "superblock": true, 00:18:40.908 "num_base_bdevs": 2, 00:18:40.908 "num_base_bdevs_discovered": 1, 00:18:40.908 "num_base_bdevs_operational": 2, 00:18:40.908 "base_bdevs_list": [ 00:18:40.908 { 00:18:40.908 "name": "BaseBdev1", 00:18:40.908 "uuid": "3187f5dd-e086-4ea8-93c4-40c5245839c7", 00:18:40.908 "is_configured": true, 00:18:40.908 "data_offset": 256, 00:18:40.908 "data_size": 7936 00:18:40.908 }, 00:18:40.908 { 00:18:40.908 "name": "BaseBdev2", 00:18:40.908 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.908 "is_configured": false, 00:18:40.908 "data_offset": 0, 00:18:40.908 "data_size": 0 00:18:40.908 } 00:18:40.908 ] 00:18:40.908 }' 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.908 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.479 [2024-10-09 03:21:24.521944] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:41.479 [2024-10-09 03:21:24.522038] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:18:41.479 03:21:24 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.479 [2024-10-09 03:21:24.533988] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:41.479 [2024-10-09 03:21:24.535904] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:41.479 [2024-10-09 03:21:24.535943] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.479 "name": "Existed_Raid", 00:18:41.479 "uuid": "328369fa-854c-4689-9910-3ecec00f9ce2", 00:18:41.479 "strip_size_kb": 0, 00:18:41.479 "state": "configuring", 00:18:41.479 "raid_level": "raid1", 00:18:41.479 "superblock": true, 00:18:41.479 "num_base_bdevs": 2, 00:18:41.479 "num_base_bdevs_discovered": 1, 00:18:41.479 "num_base_bdevs_operational": 2, 00:18:41.479 "base_bdevs_list": [ 00:18:41.479 { 00:18:41.479 "name": "BaseBdev1", 00:18:41.479 "uuid": "3187f5dd-e086-4ea8-93c4-40c5245839c7", 00:18:41.479 "is_configured": true, 00:18:41.479 "data_offset": 256, 00:18:41.479 "data_size": 7936 00:18:41.479 }, 00:18:41.479 { 00:18:41.479 "name": "BaseBdev2", 00:18:41.479 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.479 "is_configured": false, 00:18:41.479 "data_offset": 0, 00:18:41.479 "data_size": 0 00:18:41.479 } 00:18:41.479 ] 00:18:41.479 }' 00:18:41.479 03:21:24 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.479 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.740 [2024-10-09 03:21:24.988508] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:41.740 [2024-10-09 03:21:24.988849] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:41.740 [2024-10-09 03:21:24.988902] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:41.740 [2024-10-09 03:21:24.989013] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:41.740 [2024-10-09 03:21:24.989159] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:41.740 [2024-10-09 03:21:24.989199] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:41.740 [2024-10-09 03:21:24.989340] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.740 BaseBdev2 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@901 -- # local i 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.740 03:21:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.740 [ 00:18:41.740 { 00:18:41.740 "name": "BaseBdev2", 00:18:41.740 "aliases": [ 00:18:41.740 "62ae542b-f27a-4a9d-ae68-29328cc70645" 00:18:41.740 ], 00:18:41.740 "product_name": "Malloc disk", 00:18:41.740 "block_size": 4096, 00:18:41.740 "num_blocks": 8192, 00:18:41.740 "uuid": "62ae542b-f27a-4a9d-ae68-29328cc70645", 00:18:41.740 "md_size": 32, 00:18:41.740 "md_interleave": false, 00:18:41.740 "dif_type": 0, 00:18:41.740 "assigned_rate_limits": { 00:18:41.740 "rw_ios_per_sec": 0, 00:18:41.740 "rw_mbytes_per_sec": 0, 00:18:41.740 "r_mbytes_per_sec": 0, 00:18:41.740 "w_mbytes_per_sec": 0 00:18:41.740 }, 00:18:41.740 "claimed": true, 00:18:41.740 "claim_type": 
"exclusive_write", 00:18:41.740 "zoned": false, 00:18:41.740 "supported_io_types": { 00:18:41.740 "read": true, 00:18:41.740 "write": true, 00:18:41.740 "unmap": true, 00:18:41.740 "flush": true, 00:18:41.740 "reset": true, 00:18:41.740 "nvme_admin": false, 00:18:41.740 "nvme_io": false, 00:18:41.740 "nvme_io_md": false, 00:18:41.740 "write_zeroes": true, 00:18:41.740 "zcopy": true, 00:18:41.740 "get_zone_info": false, 00:18:41.740 "zone_management": false, 00:18:41.740 "zone_append": false, 00:18:41.740 "compare": false, 00:18:41.740 "compare_and_write": false, 00:18:41.740 "abort": true, 00:18:41.740 "seek_hole": false, 00:18:41.740 "seek_data": false, 00:18:41.740 "copy": true, 00:18:41.740 "nvme_iov_md": false 00:18:41.740 }, 00:18:41.740 "memory_domains": [ 00:18:41.740 { 00:18:41.740 "dma_device_id": "system", 00:18:41.740 "dma_device_type": 1 00:18:41.740 }, 00:18:41.740 { 00:18:41.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:41.740 "dma_device_type": 2 00:18:41.740 } 00:18:41.740 ], 00:18:41.740 "driver_specific": {} 00:18:41.740 } 00:18:41.740 ] 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@907 -- # return 0 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.740 
03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:41.740 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:41.999 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.999 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.999 "name": "Existed_Raid", 00:18:41.999 "uuid": "328369fa-854c-4689-9910-3ecec00f9ce2", 00:18:41.999 "strip_size_kb": 0, 00:18:41.999 "state": "online", 00:18:41.999 "raid_level": "raid1", 00:18:41.999 "superblock": true, 00:18:41.999 "num_base_bdevs": 2, 00:18:41.999 "num_base_bdevs_discovered": 2, 00:18:41.999 "num_base_bdevs_operational": 2, 00:18:41.999 
"base_bdevs_list": [ 00:18:41.999 { 00:18:41.999 "name": "BaseBdev1", 00:18:41.999 "uuid": "3187f5dd-e086-4ea8-93c4-40c5245839c7", 00:18:41.999 "is_configured": true, 00:18:41.999 "data_offset": 256, 00:18:41.999 "data_size": 7936 00:18:41.999 }, 00:18:41.999 { 00:18:41.999 "name": "BaseBdev2", 00:18:41.999 "uuid": "62ae542b-f27a-4a9d-ae68-29328cc70645", 00:18:41.999 "is_configured": true, 00:18:41.999 "data_offset": 256, 00:18:41.999 "data_size": 7936 00:18:41.999 } 00:18:41.999 ] 00:18:41.999 }' 00:18:42.000 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.000 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.259 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:42.259 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:42.259 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:42.259 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:42.259 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:42.259 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:42.259 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:42.259 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:42.259 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.259 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:18:42.259 [2024-10-09 03:21:25.476056] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:42.259 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.259 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:42.259 "name": "Existed_Raid", 00:18:42.259 "aliases": [ 00:18:42.259 "328369fa-854c-4689-9910-3ecec00f9ce2" 00:18:42.259 ], 00:18:42.259 "product_name": "Raid Volume", 00:18:42.259 "block_size": 4096, 00:18:42.259 "num_blocks": 7936, 00:18:42.259 "uuid": "328369fa-854c-4689-9910-3ecec00f9ce2", 00:18:42.259 "md_size": 32, 00:18:42.259 "md_interleave": false, 00:18:42.259 "dif_type": 0, 00:18:42.259 "assigned_rate_limits": { 00:18:42.259 "rw_ios_per_sec": 0, 00:18:42.259 "rw_mbytes_per_sec": 0, 00:18:42.259 "r_mbytes_per_sec": 0, 00:18:42.259 "w_mbytes_per_sec": 0 00:18:42.259 }, 00:18:42.259 "claimed": false, 00:18:42.259 "zoned": false, 00:18:42.259 "supported_io_types": { 00:18:42.259 "read": true, 00:18:42.259 "write": true, 00:18:42.259 "unmap": false, 00:18:42.259 "flush": false, 00:18:42.259 "reset": true, 00:18:42.259 "nvme_admin": false, 00:18:42.259 "nvme_io": false, 00:18:42.259 "nvme_io_md": false, 00:18:42.259 "write_zeroes": true, 00:18:42.259 "zcopy": false, 00:18:42.259 "get_zone_info": false, 00:18:42.259 "zone_management": false, 00:18:42.259 "zone_append": false, 00:18:42.259 "compare": false, 00:18:42.259 "compare_and_write": false, 00:18:42.259 "abort": false, 00:18:42.259 "seek_hole": false, 00:18:42.259 "seek_data": false, 00:18:42.259 "copy": false, 00:18:42.259 "nvme_iov_md": false 00:18:42.259 }, 00:18:42.259 "memory_domains": [ 00:18:42.259 { 00:18:42.259 "dma_device_id": "system", 00:18:42.259 "dma_device_type": 1 00:18:42.259 }, 00:18:42.259 { 00:18:42.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.259 "dma_device_type": 2 00:18:42.259 }, 00:18:42.259 { 
00:18:42.259 "dma_device_id": "system", 00:18:42.259 "dma_device_type": 1 00:18:42.259 }, 00:18:42.259 { 00:18:42.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:42.259 "dma_device_type": 2 00:18:42.259 } 00:18:42.259 ], 00:18:42.259 "driver_specific": { 00:18:42.259 "raid": { 00:18:42.259 "uuid": "328369fa-854c-4689-9910-3ecec00f9ce2", 00:18:42.259 "strip_size_kb": 0, 00:18:42.259 "state": "online", 00:18:42.259 "raid_level": "raid1", 00:18:42.259 "superblock": true, 00:18:42.259 "num_base_bdevs": 2, 00:18:42.259 "num_base_bdevs_discovered": 2, 00:18:42.259 "num_base_bdevs_operational": 2, 00:18:42.259 "base_bdevs_list": [ 00:18:42.259 { 00:18:42.259 "name": "BaseBdev1", 00:18:42.259 "uuid": "3187f5dd-e086-4ea8-93c4-40c5245839c7", 00:18:42.259 "is_configured": true, 00:18:42.259 "data_offset": 256, 00:18:42.259 "data_size": 7936 00:18:42.259 }, 00:18:42.259 { 00:18:42.259 "name": "BaseBdev2", 00:18:42.259 "uuid": "62ae542b-f27a-4a9d-ae68-29328cc70645", 00:18:42.259 "is_configured": true, 00:18:42.259 "data_offset": 256, 00:18:42.259 "data_size": 7936 00:18:42.259 } 00:18:42.259 ] 00:18:42.259 } 00:18:42.259 } 00:18:42.259 }' 00:18:42.259 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:42.259 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:42.259 BaseBdev2' 00:18:42.260 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate 
-- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ 
\0 ]] 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 [2024-10-09 03:21:25.671668] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:42.520 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.795 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.795 "name": "Existed_Raid", 00:18:42.795 "uuid": "328369fa-854c-4689-9910-3ecec00f9ce2", 00:18:42.795 "strip_size_kb": 0, 00:18:42.795 "state": "online", 00:18:42.795 "raid_level": "raid1", 00:18:42.795 "superblock": true, 00:18:42.795 "num_base_bdevs": 2, 00:18:42.795 "num_base_bdevs_discovered": 1, 00:18:42.795 "num_base_bdevs_operational": 1, 00:18:42.795 "base_bdevs_list": [ 00:18:42.795 { 00:18:42.795 "name": null, 00:18:42.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.795 "is_configured": false, 00:18:42.795 "data_offset": 0, 00:18:42.795 "data_size": 7936 00:18:42.795 }, 00:18:42.795 { 00:18:42.795 "name": "BaseBdev2", 00:18:42.795 "uuid": 
"62ae542b-f27a-4a9d-ae68-29328cc70645", 00:18:42.795 "is_configured": true, 00:18:42.795 "data_offset": 256, 00:18:42.795 "data_size": 7936 00:18:42.795 } 00:18:42.795 ] 00:18:42.795 }' 00:18:42.795 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.795 03:21:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.056 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:43.056 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:43.056 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.056 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:43.056 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.056 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.056 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.056 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:43.056 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:43.056 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:43.056 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.056 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.056 [2024-10-09 03:21:26.280024] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:43.056 [2024-10-09 03:21:26.280189] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:43.316 [2024-10-09 03:21:26.385918] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.316 [2024-10-09 03:21:26.386034] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.316 [2024-10-09 03:21:26.386080] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:18:43.316 03:21:26 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87481 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87481 ']' 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 87481 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87481 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87481' 00:18:43.316 killing process with pid 87481 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 87481 00:18:43.316 [2024-10-09 03:21:26.489684] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:43.316 03:21:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 87481 00:18:43.316 [2024-10-09 03:21:26.505440] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:44.698 03:21:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:18:44.698 00:18:44.698 real 0m5.301s 00:18:44.698 user 0m7.356s 00:18:44.698 sys 0m1.016s 00:18:44.698 03:21:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:44.698 
03:21:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.698 ************************************ 00:18:44.698 END TEST raid_state_function_test_sb_md_separate 00:18:44.698 ************************************ 00:18:44.698 03:21:27 bdev_raid -- bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:18:44.698 03:21:27 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:18:44.698 03:21:27 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:44.698 03:21:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.698 ************************************ 00:18:44.698 START TEST raid_superblock_test_md_separate 00:18:44.698 ************************************ 00:18:44.698 03:21:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:18:44.698 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:18:44.698 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:18:44.698 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:44.698 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:44.698 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:44.698 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:44.698 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:44.698 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:44.698 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 
00:18:44.698 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:44.698 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:44.698 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:44.698 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:44.699 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:18:44.699 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:18:44.699 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87740 00:18:44.699 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:44.699 03:21:27 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87740 00:18:44.699 03:21:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@831 -- # '[' -z 87740 ']' 00:18:44.699 03:21:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.699 03:21:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:44.699 03:21:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:44.699 03:21:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:44.699 03:21:27 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:44.699 [2024-10-09 03:21:27.977985] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:18:44.699 [2024-10-09 03:21:27.978182] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87740 ] 00:18:44.958 [2024-10-09 03:21:28.141369] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.219 [2024-10-09 03:21:28.368589] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.479 [2024-10-09 03:21:28.578239] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.479 [2024-10-09 03:21:28.578386] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:45.739 03:21:28 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.739 malloc1 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.739 [2024-10-09 03:21:28.858743] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:45.739 [2024-10-09 03:21:28.858898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.739 [2024-10-09 03:21:28.858944] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:45.739 [2024-10-09 03:21:28.858971] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.739 [2024-10-09 03:21:28.861076] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.739 [2024-10-09 03:21:28.861143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt1 00:18:45.739 pt1 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.739 malloc2 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:45.739 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.739 03:21:28 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.739 [2024-10-09 03:21:28.926926] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:45.739 [2024-10-09 03:21:28.927028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.739 [2024-10-09 03:21:28.927070] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:45.739 [2024-10-09 03:21:28.927096] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.739 [2024-10-09 03:21:28.929250] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.739 [2024-10-09 03:21:28.929321] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:45.739 pt2 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.740 [2024-10-09 03:21:28.938982] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:45.740 [2024-10-09 03:21:28.941014] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:45.740 [2024-10-09 03:21:28.941226] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:45.740 [2024-10-09 03:21:28.941273] 
bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:45.740 [2024-10-09 03:21:28.941370] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:18:45.740 [2024-10-09 03:21:28.941528] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:45.740 [2024-10-09 03:21:28.941566] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:45.740 [2024-10-09 03:21:28.941692] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.740 03:21:28 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:45.740 "name": "raid_bdev1", 00:18:45.740 "uuid": "9f1fd400-f1b5-47fe-8f1b-9d040bfba22c", 00:18:45.740 "strip_size_kb": 0, 00:18:45.740 "state": "online", 00:18:45.740 "raid_level": "raid1", 00:18:45.740 "superblock": true, 00:18:45.740 "num_base_bdevs": 2, 00:18:45.740 "num_base_bdevs_discovered": 2, 00:18:45.740 "num_base_bdevs_operational": 2, 00:18:45.740 "base_bdevs_list": [ 00:18:45.740 { 00:18:45.740 "name": "pt1", 00:18:45.740 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:45.740 "is_configured": true, 00:18:45.740 "data_offset": 256, 00:18:45.740 "data_size": 7936 00:18:45.740 }, 00:18:45.740 { 00:18:45.740 "name": "pt2", 00:18:45.740 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:45.740 "is_configured": true, 00:18:45.740 "data_offset": 256, 00:18:45.740 "data_size": 7936 00:18:45.740 } 00:18:45.740 ] 00:18:45.740 }' 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:45.740 03:21:28 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.308 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.309 [2024-10-09 03:21:29.374368] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:46.309 "name": "raid_bdev1", 00:18:46.309 "aliases": [ 00:18:46.309 "9f1fd400-f1b5-47fe-8f1b-9d040bfba22c" 00:18:46.309 ], 00:18:46.309 "product_name": "Raid Volume", 00:18:46.309 "block_size": 4096, 00:18:46.309 "num_blocks": 7936, 00:18:46.309 "uuid": "9f1fd400-f1b5-47fe-8f1b-9d040bfba22c", 00:18:46.309 "md_size": 32, 00:18:46.309 "md_interleave": false, 00:18:46.309 "dif_type": 0, 00:18:46.309 "assigned_rate_limits": { 00:18:46.309 "rw_ios_per_sec": 0, 00:18:46.309 "rw_mbytes_per_sec": 0, 00:18:46.309 "r_mbytes_per_sec": 0, 00:18:46.309 "w_mbytes_per_sec": 0 00:18:46.309 }, 00:18:46.309 "claimed": false, 00:18:46.309 "zoned": false, 
00:18:46.309 "supported_io_types": { 00:18:46.309 "read": true, 00:18:46.309 "write": true, 00:18:46.309 "unmap": false, 00:18:46.309 "flush": false, 00:18:46.309 "reset": true, 00:18:46.309 "nvme_admin": false, 00:18:46.309 "nvme_io": false, 00:18:46.309 "nvme_io_md": false, 00:18:46.309 "write_zeroes": true, 00:18:46.309 "zcopy": false, 00:18:46.309 "get_zone_info": false, 00:18:46.309 "zone_management": false, 00:18:46.309 "zone_append": false, 00:18:46.309 "compare": false, 00:18:46.309 "compare_and_write": false, 00:18:46.309 "abort": false, 00:18:46.309 "seek_hole": false, 00:18:46.309 "seek_data": false, 00:18:46.309 "copy": false, 00:18:46.309 "nvme_iov_md": false 00:18:46.309 }, 00:18:46.309 "memory_domains": [ 00:18:46.309 { 00:18:46.309 "dma_device_id": "system", 00:18:46.309 "dma_device_type": 1 00:18:46.309 }, 00:18:46.309 { 00:18:46.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.309 "dma_device_type": 2 00:18:46.309 }, 00:18:46.309 { 00:18:46.309 "dma_device_id": "system", 00:18:46.309 "dma_device_type": 1 00:18:46.309 }, 00:18:46.309 { 00:18:46.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:46.309 "dma_device_type": 2 00:18:46.309 } 00:18:46.309 ], 00:18:46.309 "driver_specific": { 00:18:46.309 "raid": { 00:18:46.309 "uuid": "9f1fd400-f1b5-47fe-8f1b-9d040bfba22c", 00:18:46.309 "strip_size_kb": 0, 00:18:46.309 "state": "online", 00:18:46.309 "raid_level": "raid1", 00:18:46.309 "superblock": true, 00:18:46.309 "num_base_bdevs": 2, 00:18:46.309 "num_base_bdevs_discovered": 2, 00:18:46.309 "num_base_bdevs_operational": 2, 00:18:46.309 "base_bdevs_list": [ 00:18:46.309 { 00:18:46.309 "name": "pt1", 00:18:46.309 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:46.309 "is_configured": true, 00:18:46.309 "data_offset": 256, 00:18:46.309 "data_size": 7936 00:18:46.309 }, 00:18:46.309 { 00:18:46.309 "name": "pt2", 00:18:46.309 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.309 "is_configured": true, 00:18:46.309 "data_offset": 256, 
00:18:46.309 "data_size": 7936 00:18:46.309 } 00:18:46.309 ] 00:18:46.309 } 00:18:46.309 } 00:18:46.309 }' 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:46.309 pt2' 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs 
-b pt2 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.309 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:46.570 [2024-10-09 03:21:29.609980] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9f1fd400-f1b5-47fe-8f1b-9d040bfba22c 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 9f1fd400-f1b5-47fe-8f1b-9d040bfba22c ']' 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.570 [2024-10-09 03:21:29.657656] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.570 [2024-10-09 03:21:29.657717] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:46.570 [2024-10-09 03:21:29.657794] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:46.570 [2024-10-09 03:21:29.657852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:46.570 [2024-10-09 03:21:29.657865] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 
00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:18:46.570 03:21:29 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.570 [2024-10-09 03:21:29.797450] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:46.570 [2024-10-09 03:21:29.799396] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:46.570 [2024-10-09 03:21:29.799457] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:46.570 [2024-10-09 03:21:29.799504] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:46.570 [2024-10-09 03:21:29.799518] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:46.570 [2024-10-09 03:21:29.799526] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:46.570 request: 00:18:46.570 { 00:18:46.570 "name": 
"raid_bdev1", 00:18:46.570 "raid_level": "raid1", 00:18:46.570 "base_bdevs": [ 00:18:46.570 "malloc1", 00:18:46.570 "malloc2" 00:18:46.570 ], 00:18:46.570 "superblock": false, 00:18:46.570 "method": "bdev_raid_create", 00:18:46.570 "req_id": 1 00:18:46.570 } 00:18:46.570 Got JSON-RPC error response 00:18:46.570 response: 00:18:46.570 { 00:18:46.570 "code": -17, 00:18:46.570 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:46.570 } 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@653 -- # es=1 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.570 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.571 [2024-10-09 03:21:29.857304] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:46.571 [2024-10-09 03:21:29.857387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:46.571 [2024-10-09 03:21:29.857416] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:46.571 [2024-10-09 03:21:29.857441] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:46.571 [2024-10-09 03:21:29.859507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:46.571 [2024-10-09 03:21:29.859574] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:46.571 [2024-10-09 03:21:29.859629] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:46.571 [2024-10-09 03:21:29.859696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:46.571 pt1 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.571 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.830 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:46.831 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.831 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.831 "name": "raid_bdev1", 00:18:46.831 "uuid": "9f1fd400-f1b5-47fe-8f1b-9d040bfba22c", 00:18:46.831 "strip_size_kb": 0, 00:18:46.831 "state": "configuring", 00:18:46.831 "raid_level": "raid1", 00:18:46.831 "superblock": true, 00:18:46.831 "num_base_bdevs": 2, 00:18:46.831 "num_base_bdevs_discovered": 1, 00:18:46.831 "num_base_bdevs_operational": 2, 00:18:46.831 "base_bdevs_list": [ 00:18:46.831 { 00:18:46.831 "name": "pt1", 00:18:46.831 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:46.831 "is_configured": true, 00:18:46.831 "data_offset": 256, 00:18:46.831 "data_size": 7936 00:18:46.831 }, 00:18:46.831 { 00:18:46.831 "name": null, 00:18:46.831 
"uuid": "00000000-0000-0000-0000-000000000002", 00:18:46.831 "is_configured": false, 00:18:46.831 "data_offset": 256, 00:18:46.831 "data_size": 7936 00:18:46.831 } 00:18:46.831 ] 00:18:46.831 }' 00:18:46.831 03:21:29 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.831 03:21:29 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.090 [2024-10-09 03:21:30.244878] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:47.090 [2024-10-09 03:21:30.244930] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:47.090 [2024-10-09 03:21:30.244947] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:47.090 [2024-10-09 03:21:30.244957] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:47.090 [2024-10-09 03:21:30.245123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:47.090 [2024-10-09 03:21:30.245138] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:47.090 [2024-10-09 03:21:30.245177] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt2 00:18:47.090 [2024-10-09 03:21:30.245196] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:47.090 [2024-10-09 03:21:30.245285] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:47.090 [2024-10-09 03:21:30.245295] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:47.090 [2024-10-09 03:21:30.245357] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:47.090 [2024-10-09 03:21:30.245466] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:47.090 [2024-10-09 03:21:30.245473] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:47.090 [2024-10-09 03:21:30.245563] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.090 pt2 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:18:47.090 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.091 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.091 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.091 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.091 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.091 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.091 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.091 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.091 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.091 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.091 "name": "raid_bdev1", 00:18:47.091 "uuid": "9f1fd400-f1b5-47fe-8f1b-9d040bfba22c", 00:18:47.091 "strip_size_kb": 0, 00:18:47.091 "state": "online", 00:18:47.091 "raid_level": "raid1", 00:18:47.091 "superblock": true, 00:18:47.091 "num_base_bdevs": 2, 00:18:47.091 "num_base_bdevs_discovered": 2, 00:18:47.091 "num_base_bdevs_operational": 2, 00:18:47.091 "base_bdevs_list": [ 00:18:47.091 { 00:18:47.091 "name": "pt1", 00:18:47.091 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:47.091 "is_configured": true, 00:18:47.091 "data_offset": 256, 00:18:47.091 "data_size": 7936 00:18:47.091 }, 00:18:47.091 { 00:18:47.091 "name": "pt2", 00:18:47.091 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:47.091 "is_configured": true, 00:18:47.091 "data_offset": 256, 
00:18:47.091 "data_size": 7936 00:18:47.091 } 00:18:47.091 ] 00:18:47.091 }' 00:18:47.091 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.091 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.660 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:47.660 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:47.660 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:47.660 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:47.660 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.661 [2024-10-09 03:21:30.668833] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:47.661 "name": "raid_bdev1", 00:18:47.661 "aliases": [ 00:18:47.661 "9f1fd400-f1b5-47fe-8f1b-9d040bfba22c" 00:18:47.661 ], 00:18:47.661 "product_name": 
"Raid Volume", 00:18:47.661 "block_size": 4096, 00:18:47.661 "num_blocks": 7936, 00:18:47.661 "uuid": "9f1fd400-f1b5-47fe-8f1b-9d040bfba22c", 00:18:47.661 "md_size": 32, 00:18:47.661 "md_interleave": false, 00:18:47.661 "dif_type": 0, 00:18:47.661 "assigned_rate_limits": { 00:18:47.661 "rw_ios_per_sec": 0, 00:18:47.661 "rw_mbytes_per_sec": 0, 00:18:47.661 "r_mbytes_per_sec": 0, 00:18:47.661 "w_mbytes_per_sec": 0 00:18:47.661 }, 00:18:47.661 "claimed": false, 00:18:47.661 "zoned": false, 00:18:47.661 "supported_io_types": { 00:18:47.661 "read": true, 00:18:47.661 "write": true, 00:18:47.661 "unmap": false, 00:18:47.661 "flush": false, 00:18:47.661 "reset": true, 00:18:47.661 "nvme_admin": false, 00:18:47.661 "nvme_io": false, 00:18:47.661 "nvme_io_md": false, 00:18:47.661 "write_zeroes": true, 00:18:47.661 "zcopy": false, 00:18:47.661 "get_zone_info": false, 00:18:47.661 "zone_management": false, 00:18:47.661 "zone_append": false, 00:18:47.661 "compare": false, 00:18:47.661 "compare_and_write": false, 00:18:47.661 "abort": false, 00:18:47.661 "seek_hole": false, 00:18:47.661 "seek_data": false, 00:18:47.661 "copy": false, 00:18:47.661 "nvme_iov_md": false 00:18:47.661 }, 00:18:47.661 "memory_domains": [ 00:18:47.661 { 00:18:47.661 "dma_device_id": "system", 00:18:47.661 "dma_device_type": 1 00:18:47.661 }, 00:18:47.661 { 00:18:47.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.661 "dma_device_type": 2 00:18:47.661 }, 00:18:47.661 { 00:18:47.661 "dma_device_id": "system", 00:18:47.661 "dma_device_type": 1 00:18:47.661 }, 00:18:47.661 { 00:18:47.661 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:47.661 "dma_device_type": 2 00:18:47.661 } 00:18:47.661 ], 00:18:47.661 "driver_specific": { 00:18:47.661 "raid": { 00:18:47.661 "uuid": "9f1fd400-f1b5-47fe-8f1b-9d040bfba22c", 00:18:47.661 "strip_size_kb": 0, 00:18:47.661 "state": "online", 00:18:47.661 "raid_level": "raid1", 00:18:47.661 "superblock": true, 00:18:47.661 "num_base_bdevs": 2, 00:18:47.661 
"num_base_bdevs_discovered": 2, 00:18:47.661 "num_base_bdevs_operational": 2, 00:18:47.661 "base_bdevs_list": [ 00:18:47.661 { 00:18:47.661 "name": "pt1", 00:18:47.661 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:47.661 "is_configured": true, 00:18:47.661 "data_offset": 256, 00:18:47.661 "data_size": 7936 00:18:47.661 }, 00:18:47.661 { 00:18:47.661 "name": "pt2", 00:18:47.661 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:47.661 "is_configured": true, 00:18:47.661 "data_offset": 256, 00:18:47.661 "data_size": 7936 00:18:47.661 } 00:18:47.661 ] 00:18:47.661 } 00:18:47.661 } 00:18:47.661 }' 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:47.661 pt2' 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.661 
03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.661 [2024-10-09 03:21:30.904393] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 9f1fd400-f1b5-47fe-8f1b-9d040bfba22c '!=' 9f1fd400-f1b5-47fe-8f1b-9d040bfba22c ']' 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.661 [2024-10-09 03:21:30.952145] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.661 03:21:30 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.661 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.921 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.921 03:21:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.921 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.921 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:47.921 03:21:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.921 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.921 "name": "raid_bdev1", 00:18:47.921 "uuid": "9f1fd400-f1b5-47fe-8f1b-9d040bfba22c", 00:18:47.921 "strip_size_kb": 0, 00:18:47.921 "state": "online", 00:18:47.921 "raid_level": "raid1", 00:18:47.921 "superblock": true, 00:18:47.921 "num_base_bdevs": 2, 00:18:47.921 "num_base_bdevs_discovered": 1, 00:18:47.921 "num_base_bdevs_operational": 1, 00:18:47.921 "base_bdevs_list": [ 00:18:47.921 { 00:18:47.921 "name": null, 00:18:47.921 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.921 "is_configured": false, 00:18:47.921 "data_offset": 0, 00:18:47.921 "data_size": 7936 00:18:47.921 }, 00:18:47.921 { 00:18:47.921 "name": "pt2", 00:18:47.921 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:47.921 "is_configured": true, 00:18:47.921 "data_offset": 256, 00:18:47.921 "data_size": 7936 00:18:47.921 } 00:18:47.921 ] 00:18:47.921 }' 00:18:47.921 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:18:47.921 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.182 [2024-10-09 03:21:31.399459] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:48.182 [2024-10-09 03:21:31.399521] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:48.182 [2024-10-09 03:21:31.399572] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:48.182 [2024-10-09 03:21:31.399605] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:48.182 [2024-10-09 03:21:31.399615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:48.182 03:21:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.182 [2024-10-09 03:21:31.471337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:48.182 [2024-10-09 03:21:31.471420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.182 
[2024-10-09 03:21:31.471447] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:18:48.182 [2024-10-09 03:21:31.471475] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.182 [2024-10-09 03:21:31.473558] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.182 [2024-10-09 03:21:31.473626] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:48.182 [2024-10-09 03:21:31.473679] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:48.182 [2024-10-09 03:21:31.473733] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:48.182 [2024-10-09 03:21:31.473827] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:48.182 [2024-10-09 03:21:31.473886] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:48.182 [2024-10-09 03:21:31.473982] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:48.182 [2024-10-09 03:21:31.474129] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:48.182 [2024-10-09 03:21:31.474166] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:48.182 [2024-10-09 03:21:31.474282] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.182 pt2 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- 
# local expected_state=online 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.182 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.442 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.442 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.442 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.442 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.442 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.442 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.442 "name": "raid_bdev1", 00:18:48.442 "uuid": "9f1fd400-f1b5-47fe-8f1b-9d040bfba22c", 00:18:48.442 "strip_size_kb": 0, 00:18:48.442 "state": "online", 00:18:48.442 "raid_level": "raid1", 00:18:48.442 "superblock": true, 00:18:48.442 "num_base_bdevs": 2, 00:18:48.442 "num_base_bdevs_discovered": 1, 00:18:48.442 "num_base_bdevs_operational": 1, 00:18:48.442 "base_bdevs_list": [ 00:18:48.442 { 00:18:48.442 
"name": null, 00:18:48.442 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.442 "is_configured": false, 00:18:48.443 "data_offset": 256, 00:18:48.443 "data_size": 7936 00:18:48.443 }, 00:18:48.443 { 00:18:48.443 "name": "pt2", 00:18:48.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:48.443 "is_configured": true, 00:18:48.443 "data_offset": 256, 00:18:48.443 "data_size": 7936 00:18:48.443 } 00:18:48.443 ] 00:18:48.443 }' 00:18:48.443 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.443 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.703 [2024-10-09 03:21:31.926546] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:48.703 [2024-10-09 03:21:31.926567] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:48.703 [2024-10-09 03:21:31.926608] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:48.703 [2024-10-09 03:21:31.926639] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:48.703 [2024-10-09 03:21:31.926646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.703 03:21:31 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.703 [2024-10-09 03:21:31.986471] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:48.703 [2024-10-09 03:21:31.986553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:48.703 [2024-10-09 03:21:31.986584] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:18:48.703 [2024-10-09 03:21:31.986607] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:48.703 [2024-10-09 03:21:31.988668] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:48.703 [2024-10-09 03:21:31.988734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:48.703 [2024-10-09 03:21:31.988802] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt1 00:18:48.703 [2024-10-09 03:21:31.988860] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:48.703 [2024-10-09 03:21:31.988990] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:48.703 [2024-10-09 03:21:31.989041] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:48.703 [2024-10-09 03:21:31.989076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:48.703 [2024-10-09 03:21:31.989190] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:48.703 [2024-10-09 03:21:31.989282] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:48.703 [2024-10-09 03:21:31.989316] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:48.703 [2024-10-09 03:21:31.989389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:48.703 [2024-10-09 03:21:31.989515] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:48.703 [2024-10-09 03:21:31.989549] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:48.703 [2024-10-09 03:21:31.989674] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:48.703 pt1 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:48.703 03:21:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:48.962 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.962 03:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:48.962 "name": "raid_bdev1", 00:18:48.962 "uuid": "9f1fd400-f1b5-47fe-8f1b-9d040bfba22c", 00:18:48.962 "strip_size_kb": 0, 00:18:48.962 "state": "online", 00:18:48.962 "raid_level": "raid1", 00:18:48.962 "superblock": true, 00:18:48.962 "num_base_bdevs": 2, 00:18:48.962 "num_base_bdevs_discovered": 1, 00:18:48.962 
"num_base_bdevs_operational": 1, 00:18:48.962 "base_bdevs_list": [ 00:18:48.962 { 00:18:48.962 "name": null, 00:18:48.962 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:48.962 "is_configured": false, 00:18:48.962 "data_offset": 256, 00:18:48.962 "data_size": 7936 00:18:48.962 }, 00:18:48.962 { 00:18:48.962 "name": "pt2", 00:18:48.962 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:48.962 "is_configured": true, 00:18:48.963 "data_offset": 256, 00:18:48.963 "data_size": 7936 00:18:48.963 } 00:18:48.963 ] 00:18:48.963 }' 00:18:48.963 03:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:48.963 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.222 03:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:49.222 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.222 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.222 03:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:49.222 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.222 03:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:49.222 03:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:49.222 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.222 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:49.222 03:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:49.222 [2024-10-09 
03:21:32.501764] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:49.222 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.482 03:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 9f1fd400-f1b5-47fe-8f1b-9d040bfba22c '!=' 9f1fd400-f1b5-47fe-8f1b-9d040bfba22c ']' 00:18:49.482 03:21:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87740 00:18:49.482 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@950 -- # '[' -z 87740 ']' 00:18:49.482 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # kill -0 87740 00:18:49.482 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # uname 00:18:49.482 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:49.482 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 87740 00:18:49.482 killing process with pid 87740 00:18:49.482 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:49.482 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:49.482 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 87740' 00:18:49.482 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@969 -- # kill 87740 00:18:49.482 [2024-10-09 03:21:32.588159] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:49.482 [2024-10-09 03:21:32.588218] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:49.482 [2024-10-09 03:21:32.588248] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev 
base bdevs is 0, going to free all in destruct 00:18:49.482 [2024-10-09 03:21:32.588261] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:49.482 03:21:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@974 -- # wait 87740 00:18:49.742 [2024-10-09 03:21:32.813143] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:51.125 03:21:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:18:51.125 00:18:51.125 real 0m6.229s 00:18:51.125 user 0m9.171s 00:18:51.125 sys 0m1.196s 00:18:51.125 03:21:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:51.125 03:21:34 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.125 ************************************ 00:18:51.125 END TEST raid_superblock_test_md_separate 00:18:51.125 ************************************ 00:18:51.125 03:21:34 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:18:51.125 03:21:34 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:18:51.125 03:21:34 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:18:51.125 03:21:34 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:51.125 03:21:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:51.125 ************************************ 00:18:51.125 START TEST raid_rebuild_test_sb_md_separate 00:18:51.125 ************************************ 00:18:51.125 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false true 00:18:51.125 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:18:51.125 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:18:51.125 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:18:51.125 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:51.125 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:51.125 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:51.126 
03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88063 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88063 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@831 -- # '[' -z 88063 ']' 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:51.126 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:51.126 03:21:34 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.126 [2024-10-09 03:21:34.303183] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:18:51.126 [2024-10-09 03:21:34.303410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.ealI/O size of 3145728 is greater than zero copy threshold (65536). 00:18:51.126 Zero copy mechanism will not be used. 00:18:51.126 :6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88063 ] 00:18:51.386 [2024-10-09 03:21:34.468333] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.645 [2024-10-09 03:21:34.710307] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.646 [2024-10-09 03:21:34.940244] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.646 [2024-10-09 03:21:34.940373] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # return 0 00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.906 BaseBdev1_malloc 
00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:51.906 [2024-10-09 03:21:35.159142] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:51.906 [2024-10-09 03:21:35.159278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:51.906 [2024-10-09 03:21:35.159323] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:51.906 [2024-10-09 03:21:35.159353] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:51.906 [2024-10-09 03:21:35.161572] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:51.906 [2024-10-09 03:21:35.161650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:51.906 BaseBdev1 00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.906 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.166 BaseBdev2_malloc 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.166 [2024-10-09 03:21:35.250524] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:52.166 [2024-10-09 03:21:35.250588] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.166 [2024-10-09 03:21:35.250607] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:52.166 [2024-10-09 03:21:35.250619] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.166 [2024-10-09 03:21:35.252686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.166 [2024-10-09 03:21:35.252724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:52.166 BaseBdev2 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.166 spare_malloc 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 
100000 -n 100000 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.166 spare_delay 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.166 [2024-10-09 03:21:35.325070] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:52.166 [2024-10-09 03:21:35.325133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:52.166 [2024-10-09 03:21:35.325153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:18:52.166 [2024-10-09 03:21:35.325165] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:52.166 [2024-10-09 03:21:35.327226] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:52.166 [2024-10-09 03:21:35.327266] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:52.166 spare 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@10 -- # set +x 00:18:52.166 [2024-10-09 03:21:35.337102] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:52.166 [2024-10-09 03:21:35.339067] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:52.166 [2024-10-09 03:21:35.339257] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:52.166 [2024-10-09 03:21:35.339271] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:18:52.166 [2024-10-09 03:21:35.339340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:18:52.166 [2024-10-09 03:21:35.339469] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:52.166 [2024-10-09 03:21:35.339477] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:52.166 [2024-10-09 03:21:35.339567] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:52.166 03:21:35 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.166 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:52.166 "name": "raid_bdev1", 00:18:52.166 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:18:52.166 "strip_size_kb": 0, 00:18:52.166 "state": "online", 00:18:52.166 "raid_level": "raid1", 00:18:52.166 "superblock": true, 00:18:52.166 "num_base_bdevs": 2, 00:18:52.166 "num_base_bdevs_discovered": 2, 00:18:52.166 "num_base_bdevs_operational": 2, 00:18:52.166 "base_bdevs_list": [ 00:18:52.166 { 00:18:52.166 "name": "BaseBdev1", 00:18:52.166 "uuid": "272ff549-b336-53c0-8f91-5fb773260355", 00:18:52.166 "is_configured": true, 00:18:52.167 "data_offset": 256, 00:18:52.167 "data_size": 7936 00:18:52.167 }, 00:18:52.167 { 00:18:52.167 "name": "BaseBdev2", 00:18:52.167 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:18:52.167 "is_configured": true, 00:18:52.167 "data_offset": 256, 00:18:52.167 "data_size": 7936 
00:18:52.167 } 00:18:52.167 ] 00:18:52.167 }' 00:18:52.167 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:52.167 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.736 [2024-10-09 03:21:35.796703] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:52.736 03:21:35 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:52.736 [2024-10-09 03:21:36.028102] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:52.995 /dev/nbd0 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@869 -- # local i 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:52.995 1+0 records in 00:18:52.995 1+0 records out 00:18:52.995 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00052327 s, 7.8 MB/s 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:52.995 03:21:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:18:52.995 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:18:53.564 7936+0 records in 00:18:53.564 7936+0 records out 00:18:53.564 32505856 bytes (33 MB, 31 MiB) copied, 0.594489 s, 54.7 MB/s 00:18:53.564 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:53.564 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:53.564 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:53.564 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:53.564 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:18:53.564 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:53.564 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:18:53.824 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:53.824 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:53.824 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:53.824 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:53.824 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:53.824 03:21:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:53.824 [2024-10-09 03:21:36.890828] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:53.824 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:18:53.824 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:18:53.824 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:53.824 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.825 [2024-10-09 03:21:36.906885] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:53.825 "name": "raid_bdev1", 00:18:53.825 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:18:53.825 "strip_size_kb": 0, 00:18:53.825 "state": "online", 00:18:53.825 "raid_level": "raid1", 00:18:53.825 "superblock": true, 00:18:53.825 "num_base_bdevs": 2, 00:18:53.825 "num_base_bdevs_discovered": 1, 00:18:53.825 "num_base_bdevs_operational": 1, 00:18:53.825 "base_bdevs_list": [ 00:18:53.825 { 00:18:53.825 "name": null, 00:18:53.825 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:53.825 "is_configured": false, 00:18:53.825 "data_offset": 0, 00:18:53.825 "data_size": 7936 00:18:53.825 }, 00:18:53.825 { 00:18:53.825 "name": "BaseBdev2", 00:18:53.825 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:18:53.825 "is_configured": true, 00:18:53.825 "data_offset": 256, 00:18:53.825 "data_size": 7936 00:18:53.825 } 00:18:53.825 ] 00:18:53.825 }' 00:18:53.825 03:21:36 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:53.825 03:21:36 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.085 03:21:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:54.085 03:21:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.085 03:21:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:54.085 [2024-10-09 03:21:37.306168] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:54.085 [2024-10-09 03:21:37.319937] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:18:54.085 03:21:37 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.085 03:21:37 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:54.085 [2024-10-09 03:21:37.321944] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:55.035 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:55.035 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.035 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:55.035 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:55.035 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.310 03:21:38 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.310 "name": "raid_bdev1", 00:18:55.310 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:18:55.310 "strip_size_kb": 0, 00:18:55.310 "state": "online", 00:18:55.310 "raid_level": "raid1", 00:18:55.310 "superblock": true, 00:18:55.310 "num_base_bdevs": 2, 00:18:55.310 "num_base_bdevs_discovered": 2, 00:18:55.310 "num_base_bdevs_operational": 2, 00:18:55.310 "process": { 00:18:55.310 "type": "rebuild", 00:18:55.310 "target": "spare", 00:18:55.310 "progress": { 00:18:55.310 "blocks": 2560, 00:18:55.310 "percent": 32 00:18:55.310 } 00:18:55.310 }, 00:18:55.310 "base_bdevs_list": [ 00:18:55.310 { 00:18:55.310 "name": "spare", 00:18:55.310 "uuid": "0f73760c-0834-5787-bee7-1f0cfb459064", 00:18:55.310 "is_configured": true, 00:18:55.310 "data_offset": 256, 00:18:55.310 "data_size": 7936 00:18:55.310 }, 00:18:55.310 { 00:18:55.310 "name": "BaseBdev2", 00:18:55.310 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:18:55.310 "is_configured": true, 00:18:55.310 "data_offset": 256, 00:18:55.310 "data_size": 7936 00:18:55.310 } 00:18:55.310 ] 00:18:55.310 }' 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == 
\s\p\a\r\e ]] 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.310 [2024-10-09 03:21:38.458738] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.310 [2024-10-09 03:21:38.530613] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:55.310 [2024-10-09 03:21:38.530672] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:55.310 [2024-10-09 03:21:38.530687] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:55.310 [2024-10-09 03:21:38.530697] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.310 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:55.310 "name": "raid_bdev1", 00:18:55.310 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:18:55.310 "strip_size_kb": 0, 00:18:55.310 "state": "online", 00:18:55.310 "raid_level": "raid1", 00:18:55.310 "superblock": true, 00:18:55.310 "num_base_bdevs": 2, 00:18:55.310 "num_base_bdevs_discovered": 1, 00:18:55.310 "num_base_bdevs_operational": 1, 00:18:55.310 "base_bdevs_list": [ 00:18:55.310 { 00:18:55.310 "name": null, 00:18:55.311 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.311 "is_configured": false, 00:18:55.311 "data_offset": 0, 00:18:55.311 "data_size": 7936 00:18:55.311 }, 00:18:55.311 { 00:18:55.311 "name": "BaseBdev2", 00:18:55.311 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:18:55.311 "is_configured": true, 00:18:55.311 "data_offset": 256, 00:18:55.311 "data_size": 7936 00:18:55.311 } 00:18:55.311 ] 00:18:55.311 }' 00:18:55.311 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:55.311 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.880 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:55.880 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:55.880 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:55.880 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:55.880 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:55.880 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:55.880 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:55.880 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.880 03:21:38 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.880 03:21:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.880 03:21:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:55.880 "name": "raid_bdev1", 00:18:55.880 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:18:55.880 "strip_size_kb": 0, 00:18:55.880 "state": "online", 00:18:55.880 "raid_level": "raid1", 00:18:55.880 "superblock": true, 00:18:55.880 "num_base_bdevs": 2, 00:18:55.880 "num_base_bdevs_discovered": 1, 00:18:55.880 "num_base_bdevs_operational": 1, 00:18:55.880 "base_bdevs_list": [ 00:18:55.880 { 00:18:55.880 "name": null, 00:18:55.880 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:55.880 
"is_configured": false, 00:18:55.880 "data_offset": 0, 00:18:55.880 "data_size": 7936 00:18:55.880 }, 00:18:55.880 { 00:18:55.880 "name": "BaseBdev2", 00:18:55.880 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:18:55.880 "is_configured": true, 00:18:55.880 "data_offset": 256, 00:18:55.880 "data_size": 7936 00:18:55.880 } 00:18:55.880 ] 00:18:55.880 }' 00:18:55.880 03:21:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.880 03:21:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:55.880 03:21:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:55.880 03:21:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:55.880 03:21:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:55.880 03:21:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.880 03:21:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:55.880 [2024-10-09 03:21:39.133339] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:55.880 [2024-10-09 03:21:39.145743] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:18:55.880 03:21:39 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.880 03:21:39 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:55.880 [2024-10-09 03:21:39.147815] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.264 03:21:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.264 "name": "raid_bdev1", 00:18:57.264 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:18:57.264 "strip_size_kb": 0, 00:18:57.264 "state": "online", 00:18:57.264 "raid_level": "raid1", 00:18:57.264 "superblock": true, 00:18:57.264 "num_base_bdevs": 2, 00:18:57.264 "num_base_bdevs_discovered": 2, 00:18:57.264 "num_base_bdevs_operational": 2, 00:18:57.264 "process": { 00:18:57.264 "type": "rebuild", 00:18:57.264 "target": "spare", 00:18:57.264 "progress": { 00:18:57.264 "blocks": 2560, 00:18:57.264 "percent": 32 00:18:57.264 } 00:18:57.264 }, 00:18:57.264 "base_bdevs_list": [ 00:18:57.264 { 00:18:57.264 "name": "spare", 00:18:57.264 "uuid": "0f73760c-0834-5787-bee7-1f0cfb459064", 00:18:57.264 "is_configured": true, 00:18:57.264 "data_offset": 256, 00:18:57.264 "data_size": 7936 00:18:57.264 }, 
00:18:57.264 { 00:18:57.264 "name": "BaseBdev2", 00:18:57.264 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:18:57.264 "is_configured": true, 00:18:57.264 "data_offset": 256, 00:18:57.264 "data_size": 7936 00:18:57.264 } 00:18:57.264 ] 00:18:57.264 }' 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:18:57.264 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=727 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.264 03:21:40 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.264 "name": "raid_bdev1", 00:18:57.264 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:18:57.264 "strip_size_kb": 0, 00:18:57.264 "state": "online", 00:18:57.264 "raid_level": "raid1", 00:18:57.264 "superblock": true, 00:18:57.264 "num_base_bdevs": 2, 00:18:57.264 "num_base_bdevs_discovered": 2, 00:18:57.264 "num_base_bdevs_operational": 2, 00:18:57.264 "process": { 00:18:57.264 "type": "rebuild", 00:18:57.264 "target": "spare", 00:18:57.264 "progress": { 00:18:57.264 "blocks": 2816, 00:18:57.264 "percent": 35 00:18:57.264 } 00:18:57.264 }, 00:18:57.264 "base_bdevs_list": [ 00:18:57.264 { 00:18:57.264 "name": "spare", 00:18:57.264 "uuid": "0f73760c-0834-5787-bee7-1f0cfb459064", 00:18:57.264 "is_configured": true, 00:18:57.264 "data_offset": 256, 00:18:57.264 "data_size": 7936 00:18:57.264 }, 00:18:57.264 { 00:18:57.264 "name": "BaseBdev2", 00:18:57.264 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:18:57.264 
"is_configured": true, 00:18:57.264 "data_offset": 256, 00:18:57.264 "data_size": 7936 00:18:57.264 } 00:18:57.264 ] 00:18:57.264 }' 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.264 03:21:40 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:58.205 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:58.205 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.205 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.205 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:58.205 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:58.205 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.205 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.205 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.205 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.205 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:58.205 03:21:41 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.205 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.205 "name": "raid_bdev1", 00:18:58.205 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:18:58.205 "strip_size_kb": 0, 00:18:58.205 "state": "online", 00:18:58.205 "raid_level": "raid1", 00:18:58.205 "superblock": true, 00:18:58.205 "num_base_bdevs": 2, 00:18:58.205 "num_base_bdevs_discovered": 2, 00:18:58.205 "num_base_bdevs_operational": 2, 00:18:58.205 "process": { 00:18:58.205 "type": "rebuild", 00:18:58.205 "target": "spare", 00:18:58.205 "progress": { 00:18:58.205 "blocks": 5632, 00:18:58.205 "percent": 70 00:18:58.205 } 00:18:58.205 }, 00:18:58.205 "base_bdevs_list": [ 00:18:58.205 { 00:18:58.205 "name": "spare", 00:18:58.205 "uuid": "0f73760c-0834-5787-bee7-1f0cfb459064", 00:18:58.205 "is_configured": true, 00:18:58.205 "data_offset": 256, 00:18:58.205 "data_size": 7936 00:18:58.205 }, 00:18:58.205 { 00:18:58.205 "name": "BaseBdev2", 00:18:58.205 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:18:58.205 "is_configured": true, 00:18:58.205 "data_offset": 256, 00:18:58.205 "data_size": 7936 00:18:58.205 } 00:18:58.205 ] 00:18:58.205 }' 00:18:58.205 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.205 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:58.205 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.465 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:58.465 03:21:41 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:59.034 [2024-10-09 03:21:42.267240] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process 
completed on raid_bdev1 00:18:59.034 [2024-10-09 03:21:42.267318] bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:59.034 [2024-10-09 03:21:42.267425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:59.294 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:59.294 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.294 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.294 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.294 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.294 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.294 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.294 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.294 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.294 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.294 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.294 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.294 "name": "raid_bdev1", 00:18:59.294 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:18:59.294 "strip_size_kb": 0, 00:18:59.294 "state": "online", 00:18:59.294 "raid_level": "raid1", 00:18:59.294 "superblock": true, 00:18:59.294 
"num_base_bdevs": 2, 00:18:59.294 "num_base_bdevs_discovered": 2, 00:18:59.294 "num_base_bdevs_operational": 2, 00:18:59.294 "base_bdevs_list": [ 00:18:59.294 { 00:18:59.294 "name": "spare", 00:18:59.294 "uuid": "0f73760c-0834-5787-bee7-1f0cfb459064", 00:18:59.294 "is_configured": true, 00:18:59.294 "data_offset": 256, 00:18:59.294 "data_size": 7936 00:18:59.294 }, 00:18:59.294 { 00:18:59.294 "name": "BaseBdev2", 00:18:59.294 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:18:59.294 "is_configured": true, 00:18:59.294 "data_offset": 256, 00:18:59.294 "data_size": 7936 00:18:59.294 } 00:18:59.294 ] 00:18:59.294 }' 00:18:59.295 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.554 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:59.554 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.554 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:18:59.554 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:18:59.554 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:59.554 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.554 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:59.554 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:59.554 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.554 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.554 03:21:42 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.554 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.554 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.554 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.554 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.554 "name": "raid_bdev1", 00:18:59.554 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:18:59.554 "strip_size_kb": 0, 00:18:59.554 "state": "online", 00:18:59.554 "raid_level": "raid1", 00:18:59.554 "superblock": true, 00:18:59.554 "num_base_bdevs": 2, 00:18:59.554 "num_base_bdevs_discovered": 2, 00:18:59.554 "num_base_bdevs_operational": 2, 00:18:59.554 "base_bdevs_list": [ 00:18:59.554 { 00:18:59.554 "name": "spare", 00:18:59.554 "uuid": "0f73760c-0834-5787-bee7-1f0cfb459064", 00:18:59.554 "is_configured": true, 00:18:59.554 "data_offset": 256, 00:18:59.554 "data_size": 7936 00:18:59.554 }, 00:18:59.554 { 00:18:59.554 "name": "BaseBdev2", 00:18:59.554 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:18:59.555 "is_configured": true, 00:18:59.555 "data_offset": 256, 00:18:59.555 "data_size": 7936 00:18:59.555 } 00:18:59.555 ] 00:18:59.555 }' 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:59.555 "name": "raid_bdev1", 00:18:59.555 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:18:59.555 
"strip_size_kb": 0, 00:18:59.555 "state": "online", 00:18:59.555 "raid_level": "raid1", 00:18:59.555 "superblock": true, 00:18:59.555 "num_base_bdevs": 2, 00:18:59.555 "num_base_bdevs_discovered": 2, 00:18:59.555 "num_base_bdevs_operational": 2, 00:18:59.555 "base_bdevs_list": [ 00:18:59.555 { 00:18:59.555 "name": "spare", 00:18:59.555 "uuid": "0f73760c-0834-5787-bee7-1f0cfb459064", 00:18:59.555 "is_configured": true, 00:18:59.555 "data_offset": 256, 00:18:59.555 "data_size": 7936 00:18:59.555 }, 00:18:59.555 { 00:18:59.555 "name": "BaseBdev2", 00:18:59.555 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:18:59.555 "is_configured": true, 00:18:59.555 "data_offset": 256, 00:18:59.555 "data_size": 7936 00:18:59.555 } 00:18:59.555 ] 00:18:59.555 }' 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:59.555 03:21:42 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.124 [2024-10-09 03:21:43.209299] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:00.124 [2024-10-09 03:21:43.209411] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:00.124 [2024-10-09 03:21:43.209502] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:00.124 [2024-10-09 03:21:43.209577] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:00.124 [2024-10-09 03:21:43.209626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, 
state offline 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # jq length 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:00.124 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:00.384 /dev/nbd0 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:00.384 1+0 records in 00:19:00.384 1+0 records out 00:19:00.384 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000530107 s, 7.7 MB/s 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:00.384 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:00.644 /dev/nbd1 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@869 -- # local i 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # break 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:00.644 1+0 records in 00:19:00.644 1+0 records out 00:19:00.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277069 s, 14.8 MB/s 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@886 -- # size=4096 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # return 0 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/nbd_common.sh@50 -- # local nbd_list 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:00.644 03:21:43 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:00.904 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:00.904 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:00.904 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:00.904 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:00.904 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:00.904 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:00.904 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:00.904 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:00.904 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:00.904 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.164 [2024-10-09 03:21:44.401358] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:01.164 [2024-10-09 03:21:44.401418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:01.164 [2024-10-09 03:21:44.401441] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:01.164 [2024-10-09 03:21:44.401451] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:19:01.164 [2024-10-09 03:21:44.403483] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:01.164 [2024-10-09 03:21:44.403581] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:01.164 [2024-10-09 03:21:44.403645] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:01.164 [2024-10-09 03:21:44.403696] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:01.164 [2024-10-09 03:21:44.403834] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:01.164 spare 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.164 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.424 [2024-10-09 03:21:44.503740] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:01.424 [2024-10-09 03:21:44.503769] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:01.424 [2024-10-09 03:21:44.503882] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:01.424 [2024-10-09 03:21:44.504002] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:01.424 [2024-10-09 03:21:44.504010] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:01.424 [2024-10-09 03:21:44.504116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.424 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.425 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.425 "name": "raid_bdev1", 00:19:01.425 "uuid": 
"4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:19:01.425 "strip_size_kb": 0, 00:19:01.425 "state": "online", 00:19:01.425 "raid_level": "raid1", 00:19:01.425 "superblock": true, 00:19:01.425 "num_base_bdevs": 2, 00:19:01.425 "num_base_bdevs_discovered": 2, 00:19:01.425 "num_base_bdevs_operational": 2, 00:19:01.425 "base_bdevs_list": [ 00:19:01.425 { 00:19:01.425 "name": "spare", 00:19:01.425 "uuid": "0f73760c-0834-5787-bee7-1f0cfb459064", 00:19:01.425 "is_configured": true, 00:19:01.425 "data_offset": 256, 00:19:01.425 "data_size": 7936 00:19:01.425 }, 00:19:01.425 { 00:19:01.425 "name": "BaseBdev2", 00:19:01.425 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:19:01.425 "is_configured": true, 00:19:01.425 "data_offset": 256, 00:19:01.425 "data_size": 7936 00:19:01.425 } 00:19:01.425 ] 00:19:01.425 }' 00:19:01.425 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.425 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.684 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:01.684 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:01.684 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:01.684 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:01.684 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:01.684 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.684 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.684 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 
00:19:01.684 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.684 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.944 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:01.944 "name": "raid_bdev1", 00:19:01.944 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:19:01.944 "strip_size_kb": 0, 00:19:01.944 "state": "online", 00:19:01.944 "raid_level": "raid1", 00:19:01.944 "superblock": true, 00:19:01.944 "num_base_bdevs": 2, 00:19:01.944 "num_base_bdevs_discovered": 2, 00:19:01.944 "num_base_bdevs_operational": 2, 00:19:01.944 "base_bdevs_list": [ 00:19:01.944 { 00:19:01.944 "name": "spare", 00:19:01.944 "uuid": "0f73760c-0834-5787-bee7-1f0cfb459064", 00:19:01.944 "is_configured": true, 00:19:01.944 "data_offset": 256, 00:19:01.944 "data_size": 7936 00:19:01.944 }, 00:19:01.944 { 00:19:01.944 "name": "BaseBdev2", 00:19:01.944 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:19:01.944 "is_configured": true, 00:19:01.944 "data_offset": 256, 00:19:01.944 "data_size": 7936 00:19:01.944 } 00:19:01.944 ] 00:19:01.944 }' 00:19:01.944 03:21:44 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:01.944 
03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.944 [2024-10-09 03:21:45.120620] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- 
# local num_base_bdevs 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.944 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.944 "name": "raid_bdev1", 00:19:01.944 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:19:01.944 "strip_size_kb": 0, 00:19:01.944 "state": "online", 00:19:01.944 "raid_level": "raid1", 00:19:01.944 "superblock": true, 00:19:01.944 "num_base_bdevs": 2, 00:19:01.944 "num_base_bdevs_discovered": 1, 00:19:01.944 "num_base_bdevs_operational": 1, 00:19:01.944 "base_bdevs_list": [ 00:19:01.944 { 00:19:01.944 "name": null, 00:19:01.945 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:01.945 "is_configured": false, 00:19:01.945 "data_offset": 0, 00:19:01.945 "data_size": 7936 00:19:01.945 }, 00:19:01.945 { 00:19:01.945 "name": "BaseBdev2", 00:19:01.945 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:19:01.945 "is_configured": true, 00:19:01.945 "data_offset": 256, 00:19:01.945 "data_size": 7936 00:19:01.945 } 00:19:01.945 ] 00:19:01.945 }' 00:19:01.945 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.945 03:21:45 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.515 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:02.515 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.515 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:02.515 [2024-10-09 03:21:45.555875] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:02.515 [2024-10-09 03:21:45.556031] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:02.515 [2024-10-09 03:21:45.556091] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:02.515 [2024-10-09 03:21:45.556188] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:02.515 [2024-10-09 03:21:45.569090] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:02.515 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.515 03:21:45 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:02.515 [2024-10-09 03:21:45.571118] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:03.454 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:03.454 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:03.454 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:03.454 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:03.454 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:03.454 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.454 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.454 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.454 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.454 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.454 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:03.454 "name": "raid_bdev1", 00:19:03.454 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:19:03.454 "strip_size_kb": 0, 00:19:03.454 "state": "online", 00:19:03.454 "raid_level": "raid1", 00:19:03.454 "superblock": true, 00:19:03.454 "num_base_bdevs": 2, 00:19:03.454 "num_base_bdevs_discovered": 2, 00:19:03.454 "num_base_bdevs_operational": 2, 00:19:03.454 "process": { 00:19:03.454 "type": "rebuild", 00:19:03.454 "target": "spare", 00:19:03.454 "progress": { 00:19:03.454 "blocks": 2560, 00:19:03.454 "percent": 32 00:19:03.454 } 00:19:03.454 }, 00:19:03.454 "base_bdevs_list": [ 00:19:03.454 { 00:19:03.454 "name": "spare", 00:19:03.454 "uuid": "0f73760c-0834-5787-bee7-1f0cfb459064", 00:19:03.454 "is_configured": true, 00:19:03.454 "data_offset": 256, 00:19:03.454 "data_size": 7936 00:19:03.454 }, 00:19:03.454 { 00:19:03.454 "name": "BaseBdev2", 00:19:03.454 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:19:03.455 "is_configured": true, 00:19:03.455 "data_offset": 256, 00:19:03.455 "data_size": 7936 00:19:03.455 } 00:19:03.455 ] 00:19:03.455 }' 00:19:03.455 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:03.455 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:03.455 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:03.455 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:03.455 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:03.455 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.455 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.455 [2024-10-09 03:21:46.731644] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.714 [2024-10-09 03:21:46.779485] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:03.715 [2024-10-09 03:21:46.779601] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:03.715 [2024-10-09 03:21:46.779635] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:03.715 [2024-10-09 03:21:46.779658] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:03.715 "name": "raid_bdev1", 00:19:03.715 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:19:03.715 "strip_size_kb": 0, 00:19:03.715 "state": "online", 00:19:03.715 "raid_level": "raid1", 00:19:03.715 "superblock": true, 00:19:03.715 "num_base_bdevs": 2, 00:19:03.715 "num_base_bdevs_discovered": 1, 00:19:03.715 "num_base_bdevs_operational": 1, 00:19:03.715 "base_bdevs_list": [ 00:19:03.715 { 00:19:03.715 "name": null, 00:19:03.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:03.715 
"is_configured": false, 00:19:03.715 "data_offset": 0, 00:19:03.715 "data_size": 7936 00:19:03.715 }, 00:19:03.715 { 00:19:03.715 "name": "BaseBdev2", 00:19:03.715 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:19:03.715 "is_configured": true, 00:19:03.715 "data_offset": 256, 00:19:03.715 "data_size": 7936 00:19:03.715 } 00:19:03.715 ] 00:19:03.715 }' 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:03.715 03:21:46 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.974 03:21:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:03.974 03:21:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.974 03:21:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:03.974 [2024-10-09 03:21:47.217580] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:03.974 [2024-10-09 03:21:47.217634] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:03.974 [2024-10-09 03:21:47.217658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:19:03.974 [2024-10-09 03:21:47.217670] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:03.974 [2024-10-09 03:21:47.217914] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:03.974 [2024-10-09 03:21:47.217933] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:03.974 [2024-10-09 03:21:47.217982] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:03.974 [2024-10-09 03:21:47.217995] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 
00:19:03.974 [2024-10-09 03:21:47.218005] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:19:03.974 [2024-10-09 03:21:47.218048] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:03.974 [2024-10-09 03:21:47.230815] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:19:03.974 spare 00:19:03.974 03:21:47 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.974 03:21:47 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:03.975 [2024-10-09 03:21:47.232863] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:05.356 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:05.356 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.356 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:05.356 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:05.356 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.356 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.356 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.356 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.357 "name": "raid_bdev1", 00:19:05.357 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:19:05.357 "strip_size_kb": 0, 00:19:05.357 "state": "online", 00:19:05.357 "raid_level": "raid1", 00:19:05.357 "superblock": true, 00:19:05.357 "num_base_bdevs": 2, 00:19:05.357 "num_base_bdevs_discovered": 2, 00:19:05.357 "num_base_bdevs_operational": 2, 00:19:05.357 "process": { 00:19:05.357 "type": "rebuild", 00:19:05.357 "target": "spare", 00:19:05.357 "progress": { 00:19:05.357 "blocks": 2560, 00:19:05.357 "percent": 32 00:19:05.357 } 00:19:05.357 }, 00:19:05.357 "base_bdevs_list": [ 00:19:05.357 { 00:19:05.357 "name": "spare", 00:19:05.357 "uuid": "0f73760c-0834-5787-bee7-1f0cfb459064", 00:19:05.357 "is_configured": true, 00:19:05.357 "data_offset": 256, 00:19:05.357 "data_size": 7936 00:19:05.357 }, 00:19:05.357 { 00:19:05.357 "name": "BaseBdev2", 00:19:05.357 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:19:05.357 "is_configured": true, 00:19:05.357 "data_offset": 256, 00:19:05.357 "data_size": 7936 00:19:05.357 } 00:19:05.357 ] 00:19:05.357 }' 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.357 03:21:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.357 [2024-10-09 03:21:48.369122] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.357 [2024-10-09 03:21:48.440340] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:05.357 [2024-10-09 03:21:48.440391] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.357 [2024-10-09 03:21:48.440411] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:05.357 [2024-10-09 03:21:48.440419] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:05.357 03:21:48 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.357 "name": "raid_bdev1", 00:19:05.357 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:19:05.357 "strip_size_kb": 0, 00:19:05.357 "state": "online", 00:19:05.357 "raid_level": "raid1", 00:19:05.357 "superblock": true, 00:19:05.357 "num_base_bdevs": 2, 00:19:05.357 "num_base_bdevs_discovered": 1, 00:19:05.357 "num_base_bdevs_operational": 1, 00:19:05.357 "base_bdevs_list": [ 00:19:05.357 { 00:19:05.357 "name": null, 00:19:05.357 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.357 "is_configured": false, 00:19:05.357 "data_offset": 0, 00:19:05.357 "data_size": 7936 00:19:05.357 }, 00:19:05.357 { 00:19:05.357 "name": "BaseBdev2", 00:19:05.357 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:19:05.357 "is_configured": true, 00:19:05.357 "data_offset": 256, 00:19:05.357 "data_size": 7936 00:19:05.357 } 00:19:05.357 ] 00:19:05.357 }' 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.357 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.617 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # 
verify_raid_bdev_process raid_bdev1 none none 00:19:05.617 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:05.617 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:05.617 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:05.617 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:05.617 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.617 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.617 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.617 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.617 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.617 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:05.617 "name": "raid_bdev1", 00:19:05.617 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:19:05.617 "strip_size_kb": 0, 00:19:05.617 "state": "online", 00:19:05.617 "raid_level": "raid1", 00:19:05.617 "superblock": true, 00:19:05.617 "num_base_bdevs": 2, 00:19:05.617 "num_base_bdevs_discovered": 1, 00:19:05.617 "num_base_bdevs_operational": 1, 00:19:05.617 "base_bdevs_list": [ 00:19:05.617 { 00:19:05.617 "name": null, 00:19:05.617 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:05.617 "is_configured": false, 00:19:05.617 "data_offset": 0, 00:19:05.617 "data_size": 7936 00:19:05.617 }, 00:19:05.617 { 00:19:05.617 "name": "BaseBdev2", 00:19:05.617 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:19:05.617 "is_configured": true, 
00:19:05.617 "data_offset": 256, 00:19:05.617 "data_size": 7936 00:19:05.617 } 00:19:05.617 ] 00:19:05.617 }' 00:19:05.617 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:05.877 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:05.877 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:05.877 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:05.877 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:05.877 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.877 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.877 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.877 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:05.877 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.877 03:21:48 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:05.877 [2024-10-09 03:21:48.998825] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:05.877 [2024-10-09 03:21:48.998881] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.877 [2024-10-09 03:21:48.998906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:19:05.877 [2024-10-09 03:21:48.998916] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.877 [2024-10-09 03:21:48.999116] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.877 [2024-10-09 03:21:48.999134] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:05.877 [2024-10-09 03:21:48.999184] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:05.877 [2024-10-09 03:21:48.999197] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:05.877 [2024-10-09 03:21:48.999208] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:05.877 [2024-10-09 03:21:48.999221] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:05.877 BaseBdev1 00:19:05.877 03:21:49 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.877 03:21:49 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:06.817 "name": "raid_bdev1", 00:19:06.817 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:19:06.817 "strip_size_kb": 0, 00:19:06.817 "state": "online", 00:19:06.817 "raid_level": "raid1", 00:19:06.817 "superblock": true, 00:19:06.817 "num_base_bdevs": 2, 00:19:06.817 "num_base_bdevs_discovered": 1, 00:19:06.817 "num_base_bdevs_operational": 1, 00:19:06.817 "base_bdevs_list": [ 00:19:06.817 { 00:19:06.817 "name": null, 00:19:06.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:06.817 "is_configured": false, 00:19:06.817 "data_offset": 0, 00:19:06.817 "data_size": 7936 00:19:06.817 }, 00:19:06.817 { 00:19:06.817 "name": "BaseBdev2", 00:19:06.817 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:19:06.817 "is_configured": true, 00:19:06.817 "data_offset": 256, 00:19:06.817 "data_size": 7936 00:19:06.817 } 00:19:06.817 ] 00:19:06.817 }' 00:19:06.817 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:06.817 03:21:50 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:07.386 "name": "raid_bdev1", 00:19:07.386 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:19:07.386 "strip_size_kb": 0, 00:19:07.386 "state": "online", 00:19:07.386 "raid_level": "raid1", 00:19:07.386 "superblock": true, 00:19:07.386 "num_base_bdevs": 2, 00:19:07.386 "num_base_bdevs_discovered": 1, 00:19:07.386 "num_base_bdevs_operational": 1, 00:19:07.386 "base_bdevs_list": [ 00:19:07.386 { 00:19:07.386 "name": null, 00:19:07.386 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:07.386 "is_configured": false, 00:19:07.386 "data_offset": 0, 00:19:07.386 
"data_size": 7936 00:19:07.386 }, 00:19:07.386 { 00:19:07.386 "name": "BaseBdev2", 00:19:07.386 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:19:07.386 "is_configured": true, 00:19:07.386 "data_offset": 256, 00:19:07.386 "data_size": 7936 00:19:07.386 } 00:19:07.386 ] 00:19:07.386 }' 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@650 -- # local es=0 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 
00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:07.386 [2024-10-09 03:21:50.556683] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:07.386 [2024-10-09 03:21:50.556802] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:07.386 [2024-10-09 03:21:50.556817] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:07.386 request: 00:19:07.386 { 00:19:07.386 "base_bdev": "BaseBdev1", 00:19:07.386 "raid_bdev": "raid_bdev1", 00:19:07.386 "method": "bdev_raid_add_base_bdev", 00:19:07.386 "req_id": 1 00:19:07.386 } 00:19:07.386 Got JSON-RPC error response 00:19:07.386 response: 00:19:07.386 { 00:19:07.386 "code": -22, 00:19:07.386 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:07.386 } 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@653 -- # es=1 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:07.386 03:21:50 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.327 "name": "raid_bdev1", 00:19:08.327 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:19:08.327 "strip_size_kb": 0, 00:19:08.327 "state": "online", 00:19:08.327 "raid_level": "raid1", 00:19:08.327 "superblock": true, 00:19:08.327 "num_base_bdevs": 2, 00:19:08.327 "num_base_bdevs_discovered": 1, 00:19:08.327 "num_base_bdevs_operational": 1, 00:19:08.327 "base_bdevs_list": [ 
00:19:08.327 { 00:19:08.327 "name": null, 00:19:08.327 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.327 "is_configured": false, 00:19:08.327 "data_offset": 0, 00:19:08.327 "data_size": 7936 00:19:08.327 }, 00:19:08.327 { 00:19:08.327 "name": "BaseBdev2", 00:19:08.327 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:19:08.327 "is_configured": true, 00:19:08.327 "data_offset": 256, 00:19:08.327 "data_size": 7936 00:19:08.327 } 00:19:08.327 ] 00:19:08.327 }' 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.327 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.897 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:08.897 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:08.897 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:08.897 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:08.897 03:21:51 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:08.897 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:08.897 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:08.897 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.897 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:08.897 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.897 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:08.897 "name": "raid_bdev1", 00:19:08.897 "uuid": "4f64f513-0242-4fa5-bdad-652e622e9ef4", 00:19:08.897 "strip_size_kb": 0, 00:19:08.897 "state": "online", 00:19:08.897 "raid_level": "raid1", 00:19:08.897 "superblock": true, 00:19:08.897 "num_base_bdevs": 2, 00:19:08.897 "num_base_bdevs_discovered": 1, 00:19:08.897 "num_base_bdevs_operational": 1, 00:19:08.897 "base_bdevs_list": [ 00:19:08.897 { 00:19:08.897 "name": null, 00:19:08.897 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.897 "is_configured": false, 00:19:08.897 "data_offset": 0, 00:19:08.897 "data_size": 7936 00:19:08.897 }, 00:19:08.897 { 00:19:08.897 "name": "BaseBdev2", 00:19:08.897 "uuid": "baab8053-cceb-5d16-b426-19f1e406df0f", 00:19:08.897 "is_configured": true, 00:19:08.897 "data_offset": 256, 00:19:08.897 "data_size": 7936 00:19:08.897 } 00:19:08.897 ] 00:19:08.897 }' 00:19:08.897 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:08.897 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:08.897 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:08.898 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:08.898 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88063 00:19:08.898 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@950 -- # '[' -z 88063 ']' 00:19:08.898 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # kill -0 88063 00:19:08.898 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # uname 00:19:08.898 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:08.898 
03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88063 00:19:08.898 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:08.898 killing process with pid 88063 00:19:08.898 Received shutdown signal, test time was about 60.000000 seconds 00:19:08.898 00:19:08.898 Latency(us) 00:19:08.898 [2024-10-09T03:21:52.201Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:08.898 [2024-10-09T03:21:52.201Z] =================================================================================================================== 00:19:08.898 [2024-10-09T03:21:52.201Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:08.898 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:08.898 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88063' 00:19:08.898 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@969 -- # kill 88063 00:19:08.898 [2024-10-09 03:21:52.145347] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:08.898 [2024-10-09 03:21:52.145442] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:08.898 [2024-10-09 03:21:52.145478] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:08.898 [2024-10-09 03:21:52.145490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:08.898 03:21:52 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@974 -- # wait 88063 00:19:09.467 [2024-10-09 03:21:52.477307] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:10.860 03:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # 
return 0 00:19:10.860 00:19:10.860 real 0m19.573s 00:19:10.860 user 0m25.122s 00:19:10.860 sys 0m2.663s 00:19:10.860 ************************************ 00:19:10.860 END TEST raid_rebuild_test_sb_md_separate 00:19:10.860 ************************************ 00:19:10.860 03:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:10.860 03:21:53 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:19:10.860 03:21:53 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:19:10.860 03:21:53 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:19:10.860 03:21:53 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:19:10.860 03:21:53 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:10.860 03:21:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:10.860 ************************************ 00:19:10.860 START TEST raid_state_function_test_sb_md_interleaved 00:19:10.860 ************************************ 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_state_function_test raid1 2 true 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:10.860 03:21:53 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:10.860 Process raid pid: 88751 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88751 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88751' 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88751 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 88751 ']' 00:19:10.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:10.860 03:21:53 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:10.860 [2024-10-09 03:21:53.948743] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
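The startup step above launches `bdev_svc` and then blocks in `waitforlisten` (with `max_retries=100`) until the process is listening on `/var/tmp/spdk.sock`. A minimal self-contained sketch of that polling pattern, assuming the essence is "retry-bounded wait for a path to appear" (a plain temp file stands in for the real RPC socket, so nothing here touches SPDK itself):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten retry loop seen in the log. Assumption: the
# real helper polls the RPC socket path; here a temp file plays that role.
sock=$(mktemp -u)                 # hypothetical stand-in for /var/tmp/spdk.sock
( sleep 0.2; touch "$sock" ) &    # simulate the target coming up asynchronously

max_retries=100                   # same bound the log shows (local max_retries=100)
i=0
until [ -e "$sock" ]; do
  i=$((i + 1))
  if [ "$i" -ge "$max_retries" ]; then
    echo "timeout"
    exit 1
  fi
  sleep 0.05
done
echo "listening"
rm -f "$sock"
```

In the real harness the loop issues an RPC probe per attempt rather than a filesystem check; the bounded-retry shape is the same.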
00:19:10.860 [2024-10-09 03:21:53.948912] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:10.860 [2024-10-09 03:21:54.113309] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.133 [2024-10-09 03:21:54.355141] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:11.393 [2024-10-09 03:21:54.589150] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.393 [2024-10-09 03:21:54.589193] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.653 [2024-10-09 03:21:54.806738] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:11.653 [2024-10-09 03:21:54.806797] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:11.653 [2024-10-09 03:21:54.806812] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:11.653 [2024-10-09 03:21:54.806822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:11.653 03:21:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:11.653 03:21:54 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:11.653 "name": "Existed_Raid", 00:19:11.653 "uuid": "d2c2a0e5-ef45-4eb1-a946-f2c15fd1ff20", 00:19:11.653 "strip_size_kb": 0, 00:19:11.653 "state": "configuring", 00:19:11.653 "raid_level": "raid1", 00:19:11.653 "superblock": true, 00:19:11.653 "num_base_bdevs": 2, 00:19:11.653 "num_base_bdevs_discovered": 0, 00:19:11.653 "num_base_bdevs_operational": 2, 00:19:11.653 "base_bdevs_list": [ 00:19:11.653 { 00:19:11.653 "name": "BaseBdev1", 00:19:11.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.653 "is_configured": false, 00:19:11.653 "data_offset": 0, 00:19:11.653 "data_size": 0 00:19:11.653 }, 00:19:11.653 { 00:19:11.653 "name": "BaseBdev2", 00:19:11.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:11.653 "is_configured": false, 00:19:11.653 "data_offset": 0, 00:19:11.653 "data_size": 0 00:19:11.653 } 00:19:11.653 ] 00:19:11.653 }' 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:11.653 03:21:54 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.222 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:12.222 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.222 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.222 [2024-10-09 03:21:55.249948] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:12.222 [2024-10-09 03:21:55.249984] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:19:12.222 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.222 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:12.222 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.222 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.222 [2024-10-09 03:21:55.257972] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:12.222 [2024-10-09 03:21:55.258012] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:12.222 [2024-10-09 03:21:55.258019] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.222 [2024-10-09 03:21:55.258031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.222 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.222 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:19:12.222 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.222 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.222 [2024-10-09 03:21:55.341702] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.222 BaseBdev1 00:19:12.222 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.222 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev1 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.223 [ 00:19:12.223 { 00:19:12.223 "name": "BaseBdev1", 00:19:12.223 "aliases": [ 00:19:12.223 "d0a64b4b-e437-4ecc-9a42-f4270fcd84db" 00:19:12.223 ], 00:19:12.223 "product_name": "Malloc disk", 00:19:12.223 "block_size": 4128, 00:19:12.223 "num_blocks": 8192, 00:19:12.223 "uuid": "d0a64b4b-e437-4ecc-9a42-f4270fcd84db", 00:19:12.223 "md_size": 32, 00:19:12.223 
"md_interleave": true, 00:19:12.223 "dif_type": 0, 00:19:12.223 "assigned_rate_limits": { 00:19:12.223 "rw_ios_per_sec": 0, 00:19:12.223 "rw_mbytes_per_sec": 0, 00:19:12.223 "r_mbytes_per_sec": 0, 00:19:12.223 "w_mbytes_per_sec": 0 00:19:12.223 }, 00:19:12.223 "claimed": true, 00:19:12.223 "claim_type": "exclusive_write", 00:19:12.223 "zoned": false, 00:19:12.223 "supported_io_types": { 00:19:12.223 "read": true, 00:19:12.223 "write": true, 00:19:12.223 "unmap": true, 00:19:12.223 "flush": true, 00:19:12.223 "reset": true, 00:19:12.223 "nvme_admin": false, 00:19:12.223 "nvme_io": false, 00:19:12.223 "nvme_io_md": false, 00:19:12.223 "write_zeroes": true, 00:19:12.223 "zcopy": true, 00:19:12.223 "get_zone_info": false, 00:19:12.223 "zone_management": false, 00:19:12.223 "zone_append": false, 00:19:12.223 "compare": false, 00:19:12.223 "compare_and_write": false, 00:19:12.223 "abort": true, 00:19:12.223 "seek_hole": false, 00:19:12.223 "seek_data": false, 00:19:12.223 "copy": true, 00:19:12.223 "nvme_iov_md": false 00:19:12.223 }, 00:19:12.223 "memory_domains": [ 00:19:12.223 { 00:19:12.223 "dma_device_id": "system", 00:19:12.223 "dma_device_type": 1 00:19:12.223 }, 00:19:12.223 { 00:19:12.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:12.223 "dma_device_type": 2 00:19:12.223 } 00:19:12.223 ], 00:19:12.223 "driver_specific": {} 00:19:12.223 } 00:19:12.223 ] 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.223 03:21:55 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.223 "name": "Existed_Raid", 00:19:12.223 "uuid": "b1cb4568-1b9d-46ab-91b3-ea43b4dad482", 00:19:12.223 "strip_size_kb": 0, 00:19:12.223 "state": "configuring", 00:19:12.223 "raid_level": "raid1", 
00:19:12.223 "superblock": true, 00:19:12.223 "num_base_bdevs": 2, 00:19:12.223 "num_base_bdevs_discovered": 1, 00:19:12.223 "num_base_bdevs_operational": 2, 00:19:12.223 "base_bdevs_list": [ 00:19:12.223 { 00:19:12.223 "name": "BaseBdev1", 00:19:12.223 "uuid": "d0a64b4b-e437-4ecc-9a42-f4270fcd84db", 00:19:12.223 "is_configured": true, 00:19:12.223 "data_offset": 256, 00:19:12.223 "data_size": 7936 00:19:12.223 }, 00:19:12.223 { 00:19:12.223 "name": "BaseBdev2", 00:19:12.223 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.223 "is_configured": false, 00:19:12.223 "data_offset": 0, 00:19:12.223 "data_size": 0 00:19:12.223 } 00:19:12.223 ] 00:19:12.223 }' 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:12.223 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.792 [2024-10-09 03:21:55.828907] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:12.792 [2024-10-09 03:21:55.828987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 
-- # xtrace_disable 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.792 [2024-10-09 03:21:55.840932] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:12.792 [2024-10-09 03:21:55.842949] bdev.c:8281:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:12.792 [2024-10-09 03:21:55.843022] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:12.792 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:12.793 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:12.793 
03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:12.793 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:12.793 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:12.793 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.793 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:12.793 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:12.793 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.793 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:12.793 "name": "Existed_Raid", 00:19:12.793 "uuid": "c5f85747-2356-4138-bfa5-6af55ae632b8", 00:19:12.793 "strip_size_kb": 0, 00:19:12.793 "state": "configuring", 00:19:12.793 "raid_level": "raid1", 00:19:12.793 "superblock": true, 00:19:12.793 "num_base_bdevs": 2, 00:19:12.793 "num_base_bdevs_discovered": 1, 00:19:12.793 "num_base_bdevs_operational": 2, 00:19:12.793 "base_bdevs_list": [ 00:19:12.793 { 00:19:12.793 "name": "BaseBdev1", 00:19:12.793 "uuid": "d0a64b4b-e437-4ecc-9a42-f4270fcd84db", 00:19:12.793 "is_configured": true, 00:19:12.793 "data_offset": 256, 00:19:12.793 "data_size": 7936 00:19:12.793 }, 00:19:12.793 { 00:19:12.793 "name": "BaseBdev2", 00:19:12.793 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:12.793 "is_configured": false, 00:19:12.793 "data_offset": 0, 00:19:12.793 "data_size": 0 00:19:12.793 } 00:19:12.793 ] 00:19:12.793 }' 00:19:12.793 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:19:12.793 03:21:55 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.052 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:19:13.052 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.053 [2024-10-09 03:21:56.335632] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:13.053 [2024-10-09 03:21:56.335819] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:13.053 [2024-10-09 03:21:56.335834] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:13.053 [2024-10-09 03:21:56.335966] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:13.053 [2024-10-09 03:21:56.336036] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:13.053 [2024-10-09 03:21:56.336049] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:13.053 [2024-10-09 03:21:56.336106] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:13.053 BaseBdev2 00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local bdev_name=BaseBdev2 00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # local bdev_timeout= 
00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@901 -- # local i 00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # rpc_cmd bdev_wait_for_examine 00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.053 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.312 [ 00:19:13.312 { 00:19:13.312 "name": "BaseBdev2", 00:19:13.312 "aliases": [ 00:19:13.312 "88b2bed7-2ed9-41d2-ae8b-963391829aa0" 00:19:13.312 ], 00:19:13.312 "product_name": "Malloc disk", 00:19:13.312 "block_size": 4128, 00:19:13.312 "num_blocks": 8192, 00:19:13.312 "uuid": "88b2bed7-2ed9-41d2-ae8b-963391829aa0", 00:19:13.312 "md_size": 32, 00:19:13.312 "md_interleave": true, 00:19:13.312 "dif_type": 0, 00:19:13.312 "assigned_rate_limits": { 00:19:13.312 "rw_ios_per_sec": 0, 00:19:13.312 "rw_mbytes_per_sec": 0, 00:19:13.312 "r_mbytes_per_sec": 0, 00:19:13.312 "w_mbytes_per_sec": 0 00:19:13.312 }, 00:19:13.312 "claimed": true, 00:19:13.312 "claim_type": "exclusive_write", 
00:19:13.312 "zoned": false, 00:19:13.312 "supported_io_types": { 00:19:13.312 "read": true, 00:19:13.312 "write": true, 00:19:13.312 "unmap": true, 00:19:13.312 "flush": true, 00:19:13.312 "reset": true, 00:19:13.312 "nvme_admin": false, 00:19:13.312 "nvme_io": false, 00:19:13.312 "nvme_io_md": false, 00:19:13.312 "write_zeroes": true, 00:19:13.312 "zcopy": true, 00:19:13.312 "get_zone_info": false, 00:19:13.312 "zone_management": false, 00:19:13.312 "zone_append": false, 00:19:13.312 "compare": false, 00:19:13.312 "compare_and_write": false, 00:19:13.312 "abort": true, 00:19:13.312 "seek_hole": false, 00:19:13.312 "seek_data": false, 00:19:13.312 "copy": true, 00:19:13.312 "nvme_iov_md": false 00:19:13.312 }, 00:19:13.312 "memory_domains": [ 00:19:13.312 { 00:19:13.312 "dma_device_id": "system", 00:19:13.312 "dma_device_type": 1 00:19:13.312 }, 00:19:13.312 { 00:19:13.312 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.312 "dma_device_type": 2 00:19:13.313 } 00:19:13.313 ], 00:19:13.313 "driver_specific": {} 00:19:13.313 } 00:19:13.313 ] 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@907 -- # return 0 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.313 
03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:13.313 "name": "Existed_Raid", 00:19:13.313 "uuid": "c5f85747-2356-4138-bfa5-6af55ae632b8", 00:19:13.313 "strip_size_kb": 0, 00:19:13.313 "state": "online", 00:19:13.313 "raid_level": "raid1", 00:19:13.313 "superblock": true, 00:19:13.313 "num_base_bdevs": 2, 00:19:13.313 "num_base_bdevs_discovered": 2, 00:19:13.313 
"num_base_bdevs_operational": 2, 00:19:13.313 "base_bdevs_list": [ 00:19:13.313 { 00:19:13.313 "name": "BaseBdev1", 00:19:13.313 "uuid": "d0a64b4b-e437-4ecc-9a42-f4270fcd84db", 00:19:13.313 "is_configured": true, 00:19:13.313 "data_offset": 256, 00:19:13.313 "data_size": 7936 00:19:13.313 }, 00:19:13.313 { 00:19:13.313 "name": "BaseBdev2", 00:19:13.313 "uuid": "88b2bed7-2ed9-41d2-ae8b-963391829aa0", 00:19:13.313 "is_configured": true, 00:19:13.313 "data_offset": 256, 00:19:13.313 "data_size": 7936 00:19:13.313 } 00:19:13.313 ] 00:19:13.313 }' 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:13.313 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.572 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:13.572 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:13.572 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:13.572 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:13.573 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:13.573 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:13.573 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:13.573 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.573 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.573 03:21:56 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:13.573 [2024-10-09 03:21:56.775165] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:13.573 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.573 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:13.573 "name": "Existed_Raid", 00:19:13.573 "aliases": [ 00:19:13.573 "c5f85747-2356-4138-bfa5-6af55ae632b8" 00:19:13.573 ], 00:19:13.573 "product_name": "Raid Volume", 00:19:13.573 "block_size": 4128, 00:19:13.573 "num_blocks": 7936, 00:19:13.573 "uuid": "c5f85747-2356-4138-bfa5-6af55ae632b8", 00:19:13.573 "md_size": 32, 00:19:13.573 "md_interleave": true, 00:19:13.573 "dif_type": 0, 00:19:13.573 "assigned_rate_limits": { 00:19:13.573 "rw_ios_per_sec": 0, 00:19:13.573 "rw_mbytes_per_sec": 0, 00:19:13.573 "r_mbytes_per_sec": 0, 00:19:13.573 "w_mbytes_per_sec": 0 00:19:13.573 }, 00:19:13.573 "claimed": false, 00:19:13.573 "zoned": false, 00:19:13.573 "supported_io_types": { 00:19:13.573 "read": true, 00:19:13.573 "write": true, 00:19:13.573 "unmap": false, 00:19:13.573 "flush": false, 00:19:13.573 "reset": true, 00:19:13.573 "nvme_admin": false, 00:19:13.573 "nvme_io": false, 00:19:13.573 "nvme_io_md": false, 00:19:13.573 "write_zeroes": true, 00:19:13.573 "zcopy": false, 00:19:13.573 "get_zone_info": false, 00:19:13.573 "zone_management": false, 00:19:13.573 "zone_append": false, 00:19:13.573 "compare": false, 00:19:13.573 "compare_and_write": false, 00:19:13.573 "abort": false, 00:19:13.573 "seek_hole": false, 00:19:13.573 "seek_data": false, 00:19:13.573 "copy": false, 00:19:13.573 "nvme_iov_md": false 00:19:13.573 }, 00:19:13.573 "memory_domains": [ 00:19:13.573 { 00:19:13.573 "dma_device_id": "system", 00:19:13.573 "dma_device_type": 1 00:19:13.573 }, 00:19:13.573 { 00:19:13.573 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:19:13.573 "dma_device_type": 2 00:19:13.573 }, 00:19:13.573 { 00:19:13.573 "dma_device_id": "system", 00:19:13.573 "dma_device_type": 1 00:19:13.573 }, 00:19:13.573 { 00:19:13.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:13.573 "dma_device_type": 2 00:19:13.573 } 00:19:13.573 ], 00:19:13.573 "driver_specific": { 00:19:13.573 "raid": { 00:19:13.573 "uuid": "c5f85747-2356-4138-bfa5-6af55ae632b8", 00:19:13.573 "strip_size_kb": 0, 00:19:13.573 "state": "online", 00:19:13.573 "raid_level": "raid1", 00:19:13.573 "superblock": true, 00:19:13.573 "num_base_bdevs": 2, 00:19:13.573 "num_base_bdevs_discovered": 2, 00:19:13.573 "num_base_bdevs_operational": 2, 00:19:13.573 "base_bdevs_list": [ 00:19:13.573 { 00:19:13.573 "name": "BaseBdev1", 00:19:13.573 "uuid": "d0a64b4b-e437-4ecc-9a42-f4270fcd84db", 00:19:13.573 "is_configured": true, 00:19:13.573 "data_offset": 256, 00:19:13.573 "data_size": 7936 00:19:13.573 }, 00:19:13.573 { 00:19:13.573 "name": "BaseBdev2", 00:19:13.573 "uuid": "88b2bed7-2ed9-41d2-ae8b-963391829aa0", 00:19:13.573 "is_configured": true, 00:19:13.573 "data_offset": 256, 00:19:13.573 "data_size": 7936 00:19:13.573 } 00:19:13.573 ] 00:19:13.573 } 00:19:13.573 } 00:19:13.573 }' 00:19:13.573 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:13.573 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:13.573 BaseBdev2' 00:19:13.573 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.833 03:21:56 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:13.833 
03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:13.833 [2024-10-09 03:21:57.014610] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:13.833 03:21:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:13.833 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.093 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.093 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:14.093 "name": "Existed_Raid", 00:19:14.093 "uuid": "c5f85747-2356-4138-bfa5-6af55ae632b8", 00:19:14.093 "strip_size_kb": 0, 00:19:14.093 "state": "online", 00:19:14.093 "raid_level": "raid1", 00:19:14.093 "superblock": true, 00:19:14.093 "num_base_bdevs": 2, 00:19:14.093 "num_base_bdevs_discovered": 1, 00:19:14.093 "num_base_bdevs_operational": 1, 00:19:14.093 "base_bdevs_list": [ 00:19:14.093 { 00:19:14.093 "name": null, 00:19:14.093 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:19:14.093 "is_configured": false, 00:19:14.093 "data_offset": 0, 00:19:14.093 "data_size": 7936 00:19:14.093 }, 00:19:14.093 { 00:19:14.093 "name": "BaseBdev2", 00:19:14.093 "uuid": "88b2bed7-2ed9-41d2-ae8b-963391829aa0", 00:19:14.093 "is_configured": true, 00:19:14.093 "data_offset": 256, 00:19:14.093 "data_size": 7936 00:19:14.093 } 00:19:14.093 ] 00:19:14.093 }' 00:19:14.093 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:14.093 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.353 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:14.354 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:14.354 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.354 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.354 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.354 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:14.354 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.354 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:14.354 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:14.354 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:19:14.354 03:21:57 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.354 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.354 [2024-10-09 03:21:57.624003] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:14.354 [2024-10-09 03:21:57.624191] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:14.614 [2024-10-09 03:21:57.723349] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:14.614 [2024-10-09 03:21:57.723479] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:14.614 [2024-10-09 03:21:57.723521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88751 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 88751 ']' 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 88751 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 88751 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:14.614 killing process with pid 88751 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 88751' 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 88751 00:19:14.614 [2024-10-09 03:21:57.822719] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:14.614 03:21:57 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 88751 00:19:14.614 [2024-10-09 03:21:57.839335] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:15.995 
03:21:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:19:15.995 00:19:15.995 real 0m5.300s 00:19:15.995 user 0m7.408s 00:19:15.995 sys 0m0.956s 00:19:15.995 03:21:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:15.995 03:21:59 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:15.995 ************************************ 00:19:15.995 END TEST raid_state_function_test_sb_md_interleaved 00:19:15.995 ************************************ 00:19:15.995 03:21:59 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:19:15.995 03:21:59 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:19:15.995 03:21:59 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:15.995 03:21:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:15.995 ************************************ 00:19:15.995 START TEST raid_superblock_test_md_interleaved 00:19:15.995 ************************************ 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1125 -- # raid_superblock_test raid1 2 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89003 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89003 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89003 ']' 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # echo 
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:15.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:15.995 03:21:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:16.255 [2024-10-09 03:21:59.325296] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:19:16.255 [2024-10-09 03:21:59.325506] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89003 ] 00:19:16.255 [2024-10-09 03:21:59.490555] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:16.514 [2024-10-09 03:21:59.731234] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:16.774 [2024-10-09 03:21:59.955490] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:16.774 [2024-10-09 03:21:59.955611] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt1 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.034 malloc1 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.034 [2024-10-09 03:22:00.197766] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:17.034 [2024-10-09 03:22:00.197924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.034 [2024-10-09 03:22:00.197967] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:17.034 [2024-10-09 03:22:00.197996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.034 
[2024-10-09 03:22:00.199918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.034 [2024-10-09 03:22:00.199990] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:17.034 pt1 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.034 malloc2 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 
-- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.034 [2024-10-09 03:22:00.286339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:17.034 [2024-10-09 03:22:00.286395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:17.034 [2024-10-09 03:22:00.286420] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:17.034 [2024-10-09 03:22:00.286429] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:17.034 [2024-10-09 03:22:00.288277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:17.034 [2024-10-09 03:22:00.288312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:17.034 pt2 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:17.034 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.035 [2024-10-09 03:22:00.298392] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:17.035 [2024-10-09 03:22:00.300368] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:17.035 [2024-10-09 03:22:00.300561] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:17.035 [2024-10-09 03:22:00.300576] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:17.035 [2024-10-09 03:22:00.300646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:17.035 [2024-10-09 03:22:00.300710] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:17.035 [2024-10-09 03:22:00.300725] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:17.035 [2024-10-09 03:22:00.300800] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:17.035 
03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.035 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.294 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:17.294 "name": "raid_bdev1", 00:19:17.294 "uuid": "f465aafa-7b0f-4b33-bbb6-abd1d03b40a2", 00:19:17.294 "strip_size_kb": 0, 00:19:17.294 "state": "online", 00:19:17.294 "raid_level": "raid1", 00:19:17.294 "superblock": true, 00:19:17.294 "num_base_bdevs": 2, 00:19:17.294 "num_base_bdevs_discovered": 2, 00:19:17.294 "num_base_bdevs_operational": 2, 00:19:17.294 "base_bdevs_list": [ 00:19:17.294 { 00:19:17.294 "name": "pt1", 00:19:17.294 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:17.294 "is_configured": true, 00:19:17.294 "data_offset": 256, 00:19:17.294 "data_size": 7936 00:19:17.294 }, 00:19:17.294 { 00:19:17.294 "name": "pt2", 00:19:17.294 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:17.294 "is_configured": true, 00:19:17.294 "data_offset": 256, 00:19:17.294 "data_size": 7936 00:19:17.294 } 00:19:17.294 ] 00:19:17.294 }' 00:19:17.294 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:17.294 03:22:00 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.555 [2024-10-09 03:22:00.713897] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:17.555 "name": "raid_bdev1", 00:19:17.555 "aliases": [ 00:19:17.555 "f465aafa-7b0f-4b33-bbb6-abd1d03b40a2" 00:19:17.555 ], 00:19:17.555 "product_name": "Raid Volume", 00:19:17.555 "block_size": 4128, 00:19:17.555 "num_blocks": 7936, 00:19:17.555 "uuid": "f465aafa-7b0f-4b33-bbb6-abd1d03b40a2", 00:19:17.555 "md_size": 32, 
00:19:17.555 "md_interleave": true, 00:19:17.555 "dif_type": 0, 00:19:17.555 "assigned_rate_limits": { 00:19:17.555 "rw_ios_per_sec": 0, 00:19:17.555 "rw_mbytes_per_sec": 0, 00:19:17.555 "r_mbytes_per_sec": 0, 00:19:17.555 "w_mbytes_per_sec": 0 00:19:17.555 }, 00:19:17.555 "claimed": false, 00:19:17.555 "zoned": false, 00:19:17.555 "supported_io_types": { 00:19:17.555 "read": true, 00:19:17.555 "write": true, 00:19:17.555 "unmap": false, 00:19:17.555 "flush": false, 00:19:17.555 "reset": true, 00:19:17.555 "nvme_admin": false, 00:19:17.555 "nvme_io": false, 00:19:17.555 "nvme_io_md": false, 00:19:17.555 "write_zeroes": true, 00:19:17.555 "zcopy": false, 00:19:17.555 "get_zone_info": false, 00:19:17.555 "zone_management": false, 00:19:17.555 "zone_append": false, 00:19:17.555 "compare": false, 00:19:17.555 "compare_and_write": false, 00:19:17.555 "abort": false, 00:19:17.555 "seek_hole": false, 00:19:17.555 "seek_data": false, 00:19:17.555 "copy": false, 00:19:17.555 "nvme_iov_md": false 00:19:17.555 }, 00:19:17.555 "memory_domains": [ 00:19:17.555 { 00:19:17.555 "dma_device_id": "system", 00:19:17.555 "dma_device_type": 1 00:19:17.555 }, 00:19:17.555 { 00:19:17.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.555 "dma_device_type": 2 00:19:17.555 }, 00:19:17.555 { 00:19:17.555 "dma_device_id": "system", 00:19:17.555 "dma_device_type": 1 00:19:17.555 }, 00:19:17.555 { 00:19:17.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:17.555 "dma_device_type": 2 00:19:17.555 } 00:19:17.555 ], 00:19:17.555 "driver_specific": { 00:19:17.555 "raid": { 00:19:17.555 "uuid": "f465aafa-7b0f-4b33-bbb6-abd1d03b40a2", 00:19:17.555 "strip_size_kb": 0, 00:19:17.555 "state": "online", 00:19:17.555 "raid_level": "raid1", 00:19:17.555 "superblock": true, 00:19:17.555 "num_base_bdevs": 2, 00:19:17.555 "num_base_bdevs_discovered": 2, 00:19:17.555 "num_base_bdevs_operational": 2, 00:19:17.555 "base_bdevs_list": [ 00:19:17.555 { 00:19:17.555 "name": "pt1", 00:19:17.555 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:19:17.555 "is_configured": true, 00:19:17.555 "data_offset": 256, 00:19:17.555 "data_size": 7936 00:19:17.555 }, 00:19:17.555 { 00:19:17.555 "name": "pt2", 00:19:17.555 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:17.555 "is_configured": true, 00:19:17.555 "data_offset": 256, 00:19:17.555 "data_size": 7936 00:19:17.555 } 00:19:17.555 ] 00:19:17.555 } 00:19:17.555 } 00:19:17.555 }' 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:17.555 pt2' 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.555 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:17.815 03:22:00 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.815 [2024-10-09 03:22:00.953419] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=f465aafa-7b0f-4b33-bbb6-abd1d03b40a2 00:19:17.815 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z f465aafa-7b0f-4b33-bbb6-abd1d03b40a2 ']' 00:19:17.816 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:17.816 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.816 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.816 [2024-10-09 03:22:00.997106] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:17.816 [2024-10-09 03:22:00.997127] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:17.816 [2024-10-09 03:22:00.997197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:17.816 [2024-10-09 03:22:00.997241] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:17.816 [2024-10-09 03:22:00.997256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:17.816 03:22:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.816 03:22:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.816 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:17.816 03:22:01 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.076 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:18.076 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:18.076 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:19:18.076 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:18.076 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:18.076 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.076 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.077 [2024-10-09 03:22:01.136962] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:18.077 [2024-10-09 03:22:01.138976] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:18.077 [2024-10-09 03:22:01.139037] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:19:18.077 [2024-10-09 03:22:01.139079] bdev_raid.c:3229:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:18.077 [2024-10-09 03:22:01.139093] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:18.077 [2024-10-09 03:22:01.139102] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:18.077 request: 00:19:18.077 { 00:19:18.077 "name": "raid_bdev1", 00:19:18.077 "raid_level": "raid1", 00:19:18.077 "base_bdevs": [ 00:19:18.077 "malloc1", 00:19:18.077 "malloc2" 00:19:18.077 ], 00:19:18.077 "superblock": false, 00:19:18.077 "method": "bdev_raid_create", 00:19:18.077 "req_id": 1 00:19:18.077 } 00:19:18.077 Got JSON-RPC error response 00:19:18.077 response: 00:19:18.077 { 00:19:18.077 "code": -17, 00:19:18.077 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:18.077 } 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.077 03:22:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.077 [2024-10-09 03:22:01.200891] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:18.077 [2024-10-09 03:22:01.200978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.077 [2024-10-09 03:22:01.201006] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:18.077 [2024-10-09 03:22:01.201039] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.077 [2024-10-09 03:22:01.202996] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.077 [2024-10-09 03:22:01.203064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:18.077 [2024-10-09 03:22:01.203118] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:18.077 [2024-10-09 03:22:01.203192] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:18.077 pt1 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.077 03:22:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.077 
"name": "raid_bdev1", 00:19:18.077 "uuid": "f465aafa-7b0f-4b33-bbb6-abd1d03b40a2", 00:19:18.077 "strip_size_kb": 0, 00:19:18.077 "state": "configuring", 00:19:18.077 "raid_level": "raid1", 00:19:18.077 "superblock": true, 00:19:18.077 "num_base_bdevs": 2, 00:19:18.077 "num_base_bdevs_discovered": 1, 00:19:18.077 "num_base_bdevs_operational": 2, 00:19:18.077 "base_bdevs_list": [ 00:19:18.077 { 00:19:18.077 "name": "pt1", 00:19:18.077 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:18.077 "is_configured": true, 00:19:18.077 "data_offset": 256, 00:19:18.077 "data_size": 7936 00:19:18.077 }, 00:19:18.077 { 00:19:18.077 "name": null, 00:19:18.077 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:18.077 "is_configured": false, 00:19:18.077 "data_offset": 256, 00:19:18.077 "data_size": 7936 00:19:18.077 } 00:19:18.077 ] 00:19:18.077 }' 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.077 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.337 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:18.337 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:18.337 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:18.337 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:18.337 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.337 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.337 [2024-10-09 03:22:01.616695] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:18.337 [2024-10-09 03:22:01.616794] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:18.337 [2024-10-09 03:22:01.616813] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:18.337 [2024-10-09 03:22:01.616823] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:18.337 [2024-10-09 03:22:01.616921] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:18.338 [2024-10-09 03:22:01.616935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:18.338 [2024-10-09 03:22:01.616965] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:18.338 [2024-10-09 03:22:01.616980] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:18.338 [2024-10-09 03:22:01.617039] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:18.338 [2024-10-09 03:22:01.617048] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:18.338 [2024-10-09 03:22:01.617108] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:18.338 [2024-10-09 03:22:01.617162] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:18.338 [2024-10-09 03:22:01.617168] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:18.338 [2024-10-09 03:22:01.617213] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:18.338 pt2 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:18.338 03:22:01 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.338 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.598 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.598 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:18.598 "name": 
"raid_bdev1", 00:19:18.598 "uuid": "f465aafa-7b0f-4b33-bbb6-abd1d03b40a2", 00:19:18.598 "strip_size_kb": 0, 00:19:18.598 "state": "online", 00:19:18.598 "raid_level": "raid1", 00:19:18.598 "superblock": true, 00:19:18.598 "num_base_bdevs": 2, 00:19:18.598 "num_base_bdevs_discovered": 2, 00:19:18.598 "num_base_bdevs_operational": 2, 00:19:18.598 "base_bdevs_list": [ 00:19:18.598 { 00:19:18.598 "name": "pt1", 00:19:18.598 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:18.598 "is_configured": true, 00:19:18.598 "data_offset": 256, 00:19:18.598 "data_size": 7936 00:19:18.598 }, 00:19:18.598 { 00:19:18.598 "name": "pt2", 00:19:18.598 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:18.598 "is_configured": true, 00:19:18.598 "data_offset": 256, 00:19:18.598 "data_size": 7936 00:19:18.598 } 00:19:18.598 ] 00:19:18.598 }' 00:19:18.598 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:18.598 03:22:01 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.858 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:19:18.858 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:18.858 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:18.858 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:18.858 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:19:18.858 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:18.858 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:18.858 03:22:02 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.858 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:18.858 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:18.858 [2024-10-09 03:22:02.040169] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:18.858 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.858 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:18.858 "name": "raid_bdev1", 00:19:18.858 "aliases": [ 00:19:18.858 "f465aafa-7b0f-4b33-bbb6-abd1d03b40a2" 00:19:18.858 ], 00:19:18.858 "product_name": "Raid Volume", 00:19:18.858 "block_size": 4128, 00:19:18.858 "num_blocks": 7936, 00:19:18.858 "uuid": "f465aafa-7b0f-4b33-bbb6-abd1d03b40a2", 00:19:18.858 "md_size": 32, 00:19:18.858 "md_interleave": true, 00:19:18.858 "dif_type": 0, 00:19:18.858 "assigned_rate_limits": { 00:19:18.858 "rw_ios_per_sec": 0, 00:19:18.858 "rw_mbytes_per_sec": 0, 00:19:18.858 "r_mbytes_per_sec": 0, 00:19:18.858 "w_mbytes_per_sec": 0 00:19:18.858 }, 00:19:18.858 "claimed": false, 00:19:18.858 "zoned": false, 00:19:18.858 "supported_io_types": { 00:19:18.858 "read": true, 00:19:18.858 "write": true, 00:19:18.858 "unmap": false, 00:19:18.858 "flush": false, 00:19:18.858 "reset": true, 00:19:18.858 "nvme_admin": false, 00:19:18.858 "nvme_io": false, 00:19:18.858 "nvme_io_md": false, 00:19:18.858 "write_zeroes": true, 00:19:18.858 "zcopy": false, 00:19:18.858 "get_zone_info": false, 00:19:18.858 "zone_management": false, 00:19:18.858 "zone_append": false, 00:19:18.858 "compare": false, 00:19:18.858 "compare_and_write": false, 00:19:18.858 "abort": false, 00:19:18.858 "seek_hole": false, 00:19:18.858 "seek_data": false, 00:19:18.858 "copy": false, 00:19:18.858 "nvme_iov_md": 
false 00:19:18.858 }, 00:19:18.858 "memory_domains": [ 00:19:18.858 { 00:19:18.858 "dma_device_id": "system", 00:19:18.858 "dma_device_type": 1 00:19:18.858 }, 00:19:18.858 { 00:19:18.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.858 "dma_device_type": 2 00:19:18.858 }, 00:19:18.858 { 00:19:18.858 "dma_device_id": "system", 00:19:18.858 "dma_device_type": 1 00:19:18.858 }, 00:19:18.858 { 00:19:18.858 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:18.858 "dma_device_type": 2 00:19:18.858 } 00:19:18.858 ], 00:19:18.858 "driver_specific": { 00:19:18.858 "raid": { 00:19:18.858 "uuid": "f465aafa-7b0f-4b33-bbb6-abd1d03b40a2", 00:19:18.858 "strip_size_kb": 0, 00:19:18.858 "state": "online", 00:19:18.858 "raid_level": "raid1", 00:19:18.858 "superblock": true, 00:19:18.858 "num_base_bdevs": 2, 00:19:18.858 "num_base_bdevs_discovered": 2, 00:19:18.858 "num_base_bdevs_operational": 2, 00:19:18.858 "base_bdevs_list": [ 00:19:18.858 { 00:19:18.858 "name": "pt1", 00:19:18.858 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:18.858 "is_configured": true, 00:19:18.858 "data_offset": 256, 00:19:18.858 "data_size": 7936 00:19:18.858 }, 00:19:18.858 { 00:19:18.858 "name": "pt2", 00:19:18.858 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:18.858 "is_configured": true, 00:19:18.858 "data_offset": 256, 00:19:18.858 "data_size": 7936 00:19:18.858 } 00:19:18.858 ] 00:19:18.858 } 00:19:18.858 } 00:19:18.858 }' 00:19:18.858 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:18.858 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:18.858 pt2' 00:19:18.858 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='4128 32 true 0' 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.116 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.116 [2024-10-09 03:22:02.275930] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' f465aafa-7b0f-4b33-bbb6-abd1d03b40a2 '!=' f465aafa-7b0f-4b33-bbb6-abd1d03b40a2 ']' 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.117 [2024-10-09 03:22:02.307688] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:19:19.117 "name": "raid_bdev1", 00:19:19.117 "uuid": "f465aafa-7b0f-4b33-bbb6-abd1d03b40a2", 00:19:19.117 "strip_size_kb": 0, 00:19:19.117 "state": "online", 00:19:19.117 "raid_level": "raid1", 00:19:19.117 "superblock": true, 00:19:19.117 "num_base_bdevs": 2, 00:19:19.117 "num_base_bdevs_discovered": 1, 00:19:19.117 "num_base_bdevs_operational": 1, 00:19:19.117 "base_bdevs_list": [ 00:19:19.117 { 00:19:19.117 "name": null, 00:19:19.117 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.117 "is_configured": false, 00:19:19.117 "data_offset": 0, 00:19:19.117 "data_size": 7936 00:19:19.117 }, 00:19:19.117 { 00:19:19.117 "name": "pt2", 00:19:19.117 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.117 "is_configured": true, 00:19:19.117 "data_offset": 256, 00:19:19.117 "data_size": 7936 00:19:19.117 } 00:19:19.117 ] 00:19:19.117 }' 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.117 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.686 [2024-10-09 03:22:02.746922] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:19.686 [2024-10-09 03:22:02.746983] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:19.686 [2024-10-09 03:22:02.747047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:19.686 [2024-10-09 03:22:02.747090] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 
00:19:19.686 [2024-10-09 03:22:02.747121] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.686 [2024-10-09 03:22:02.802854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:19.686 [2024-10-09 03:22:02.802894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:19.686 [2024-10-09 03:22:02.802906] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:19.686 [2024-10-09 03:22:02.802915] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:19.686 [2024-10-09 03:22:02.804960] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:19.686 [2024-10-09 03:22:02.804996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:19.686 [2024-10-09 03:22:02.805030] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:19.686 [2024-10-09 03:22:02.805071] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:19.686 [2024-10-09 03:22:02.805113] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:19.686 [2024-10-09 03:22:02.805124] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: 
blockcnt 7936, blocklen 4128 00:19:19.686 [2024-10-09 03:22:02.805191] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:19.686 [2024-10-09 03:22:02.805244] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:19.686 [2024-10-09 03:22:02.805250] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:19:19.686 [2024-10-09 03:22:02.805297] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:19.686 pt2 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:19.686 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:19.687 03:22:02 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:19.687 "name": "raid_bdev1", 00:19:19.687 "uuid": "f465aafa-7b0f-4b33-bbb6-abd1d03b40a2", 00:19:19.687 "strip_size_kb": 0, 00:19:19.687 "state": "online", 00:19:19.687 "raid_level": "raid1", 00:19:19.687 "superblock": true, 00:19:19.687 "num_base_bdevs": 2, 00:19:19.687 "num_base_bdevs_discovered": 1, 00:19:19.687 "num_base_bdevs_operational": 1, 00:19:19.687 "base_bdevs_list": [ 00:19:19.687 { 00:19:19.687 "name": null, 00:19:19.687 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:19.687 "is_configured": false, 00:19:19.687 "data_offset": 256, 00:19:19.687 "data_size": 7936 00:19:19.687 }, 00:19:19.687 { 00:19:19.687 "name": "pt2", 00:19:19.687 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:19.687 "is_configured": true, 00:19:19.687 "data_offset": 256, 00:19:19.687 "data_size": 7936 00:19:19.687 } 00:19:19.687 ] 00:19:19.687 }' 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:19.687 03:22:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:20.257 03:22:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.257 [2024-10-09 03:22:03.273964] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.257 [2024-10-09 03:22:03.274030] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:20.257 [2024-10-09 03:22:03.274085] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.257 [2024-10-09 03:22:03.274131] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.257 [2024-10-09 03:22:03.274160] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved 
-- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.257 [2024-10-09 03:22:03.333949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:20.257 [2024-10-09 03:22:03.334032] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:20.257 [2024-10-09 03:22:03.334061] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:20.257 [2024-10-09 03:22:03.334083] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:20.257 [2024-10-09 03:22:03.336057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:20.257 [2024-10-09 03:22:03.336122] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:20.257 [2024-10-09 03:22:03.336178] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:20.257 [2024-10-09 03:22:03.336223] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:20.257 [2024-10-09 03:22:03.336306] bdev_raid.c:3675:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:20.257 [2024-10-09 03:22:03.336350] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:20.257 [2024-10-09 03:22:03.336416] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:20.257 [2024-10-09 03:22:03.336509] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:20.257 [2024-10-09 03:22:03.336598] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000008900 00:19:20.257 [2024-10-09 03:22:03.336632] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:20.257 [2024-10-09 03:22:03.336694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:20.257 [2024-10-09 03:22:03.336772] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:20.257 [2024-10-09 03:22:03.336820] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:20.257 [2024-10-09 03:22:03.336918] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:20.257 pt1 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:20.257 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:20.257 03:22:03 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:20.258 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:20.258 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:20.258 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.258 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.258 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:20.258 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.258 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:20.258 "name": "raid_bdev1", 00:19:20.258 "uuid": "f465aafa-7b0f-4b33-bbb6-abd1d03b40a2", 00:19:20.258 "strip_size_kb": 0, 00:19:20.258 "state": "online", 00:19:20.258 "raid_level": "raid1", 00:19:20.258 "superblock": true, 00:19:20.258 "num_base_bdevs": 2, 00:19:20.258 "num_base_bdevs_discovered": 1, 00:19:20.258 "num_base_bdevs_operational": 1, 00:19:20.258 "base_bdevs_list": [ 00:19:20.258 { 00:19:20.258 "name": null, 00:19:20.258 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:20.258 "is_configured": false, 00:19:20.258 "data_offset": 256, 00:19:20.258 "data_size": 7936 00:19:20.258 }, 00:19:20.258 { 00:19:20.258 "name": "pt2", 00:19:20.258 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:20.258 "is_configured": true, 00:19:20.258 "data_offset": 256, 00:19:20.258 "data_size": 7936 00:19:20.258 } 00:19:20.258 ] 00:19:20.258 }' 00:19:20.258 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:20.258 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:20.518 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:20.518 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:20.518 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.518 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.518 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.518 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:20.518 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:20.518 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:20.518 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.518 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:20.518 [2024-10-09 03:22:03.785478] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:20.518 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.778 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' f465aafa-7b0f-4b33-bbb6-abd1d03b40a2 '!=' f465aafa-7b0f-4b33-bbb6-abd1d03b40a2 ']' 00:19:20.778 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89003 00:19:20.778 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89003 ']' 00:19:20.778 03:22:03 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89003 00:19:20.778 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:19:20.778 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:20.778 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89003 00:19:20.778 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:20.778 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:20.778 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89003' 00:19:20.778 killing process with pid 89003 00:19:20.778 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@969 -- # kill 89003 00:19:20.778 [2024-10-09 03:22:03.862980] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:20.778 [2024-10-09 03:22:03.863044] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:20.778 [2024-10-09 03:22:03.863073] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:20.778 [2024-10-09 03:22:03.863085] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:20.778 03:22:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@974 -- # wait 89003 00:19:20.778 [2024-10-09 03:22:04.077080] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:22.160 03:22:05 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:19:22.160 00:19:22.160 real 0m6.139s 00:19:22.160 user 0m8.975s 00:19:22.160 sys 0m1.188s 00:19:22.160 
03:22:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:22.160 03:22:05 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.160 ************************************ 00:19:22.160 END TEST raid_superblock_test_md_interleaved 00:19:22.160 ************************************ 00:19:22.160 03:22:05 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:19:22.160 03:22:05 bdev_raid -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:22.160 03:22:05 bdev_raid -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:22.160 03:22:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:22.160 ************************************ 00:19:22.160 START TEST raid_rebuild_test_sb_md_interleaved 00:19:22.160 ************************************ 00:19:22.160 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1125 -- # raid_rebuild_test raid1 2 true false false 00:19:22.160 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:22.160 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:22.160 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:22.160 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:22.160 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:19:22.160 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:22.160 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:22.160 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:22.160 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved 
-- bdev/bdev_raid.sh@597 -- # raid_pid=89330 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89330 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@831 -- # '[' -z 89330 ']' 00:19:22.161 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.421 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.421 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:22.421 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.421 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:22.421 03:22:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:22.421 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:22.421 Zero copy mechanism will not be used. 00:19:22.421 [2024-10-09 03:22:05.547799] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:19:22.421 [2024-10-09 03:22:05.547954] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89330 ] 00:19:22.421 [2024-10-09 03:22:05.712559] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.681 [2024-10-09 03:22:05.955701] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.941 [2024-10-09 03:22:06.185339] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:22.941 [2024-10-09 03:22:06.185459] bdev_raid.c:1452:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:23.201 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:23.201 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # return 0 00:19:23.201 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:23.201 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:19:23.201 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.201 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.201 BaseBdev1_malloc 00:19:23.201 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.201 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:23.201 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.201 03:22:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.201 [2024-10-09 03:22:06.423918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:23.201 [2024-10-09 03:22:06.424063] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.201 [2024-10-09 03:22:06.424108] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:23.201 [2024-10-09 03:22:06.424140] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.201 [2024-10-09 03:22:06.426222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.201 [2024-10-09 03:22:06.426301] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:23.201 BaseBdev1 00:19:23.201 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.201 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:23.201 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:19:23.201 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.201 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.462 BaseBdev2_malloc 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:19:23.462 [2024-10-09 03:22:06.516947] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:23.462 [2024-10-09 03:22:06.517060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.462 [2024-10-09 03:22:06.517098] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:23.462 [2024-10-09 03:22:06.517128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.462 [2024-10-09 03:22:06.519159] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.462 [2024-10-09 03:22:06.519233] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:23.462 BaseBdev2 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.462 spare_malloc 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.462 spare_delay 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.462 [2024-10-09 03:22:06.589195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:23.462 [2024-10-09 03:22:06.589253] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.462 [2024-10-09 03:22:06.589273] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:23.462 [2024-10-09 03:22:06.589285] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.462 [2024-10-09 03:22:06.591401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.462 [2024-10-09 03:22:06.591474] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:23.462 spare 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.462 [2024-10-09 03:22:06.601239] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:23.462 [2024-10-09 03:22:06.603202] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:23.462 [2024-10-09 
03:22:06.603403] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:23.462 [2024-10-09 03:22:06.603418] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:23.462 [2024-10-09 03:22:06.603493] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:23.462 [2024-10-09 03:22:06.603562] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:23.462 [2024-10-09 03:22:06.603575] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:23.462 [2024-10-09 03:22:06.603642] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.462 "name": "raid_bdev1", 00:19:23.462 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:23.462 "strip_size_kb": 0, 00:19:23.462 "state": "online", 00:19:23.462 "raid_level": "raid1", 00:19:23.462 "superblock": true, 00:19:23.462 "num_base_bdevs": 2, 00:19:23.462 "num_base_bdevs_discovered": 2, 00:19:23.462 "num_base_bdevs_operational": 2, 00:19:23.462 "base_bdevs_list": [ 00:19:23.462 { 00:19:23.462 "name": "BaseBdev1", 00:19:23.462 "uuid": "a41952da-635a-53ff-b4d8-71e833768b92", 00:19:23.462 "is_configured": true, 00:19:23.462 "data_offset": 256, 00:19:23.462 "data_size": 7936 00:19:23.462 }, 00:19:23.462 { 00:19:23.462 "name": "BaseBdev2", 00:19:23.462 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:23.462 "is_configured": true, 00:19:23.462 "data_offset": 256, 00:19:23.462 "data_size": 7936 00:19:23.462 } 00:19:23.462 ] 00:19:23.462 }' 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.462 03:22:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.722 03:22:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:23.722 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:23.722 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.722 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.722 [2024-10-09 03:22:07.016823] bdev_raid.c:1129:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:23.983 03:22:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.983 [2024-10-09 03:22:07.116403] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.983 03:22:07 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.983 "name": "raid_bdev1", 00:19:23.983 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:23.983 "strip_size_kb": 0, 00:19:23.983 "state": "online", 00:19:23.983 "raid_level": "raid1", 00:19:23.983 "superblock": true, 00:19:23.983 "num_base_bdevs": 2, 00:19:23.983 "num_base_bdevs_discovered": 1, 00:19:23.983 "num_base_bdevs_operational": 1, 00:19:23.983 "base_bdevs_list": [ 00:19:23.983 { 00:19:23.983 "name": null, 00:19:23.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:23.983 "is_configured": false, 00:19:23.983 "data_offset": 0, 00:19:23.983 "data_size": 7936 00:19:23.983 }, 00:19:23.983 { 00:19:23.983 "name": "BaseBdev2", 00:19:23.983 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:23.983 "is_configured": true, 00:19:23.983 "data_offset": 256, 00:19:23.983 "data_size": 7936 00:19:23.983 } 00:19:23.983 ] 00:19:23.983 }' 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.983 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.553 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:24.553 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.553 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:24.553 [2024-10-09 03:22:07.583918] 
bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:24.553 [2024-10-09 03:22:07.600163] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:24.553 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.553 03:22:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:24.553 [2024-10-09 03:22:07.602208] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:25.497 "name": "raid_bdev1", 00:19:25.497 
"uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:25.497 "strip_size_kb": 0, 00:19:25.497 "state": "online", 00:19:25.497 "raid_level": "raid1", 00:19:25.497 "superblock": true, 00:19:25.497 "num_base_bdevs": 2, 00:19:25.497 "num_base_bdevs_discovered": 2, 00:19:25.497 "num_base_bdevs_operational": 2, 00:19:25.497 "process": { 00:19:25.497 "type": "rebuild", 00:19:25.497 "target": "spare", 00:19:25.497 "progress": { 00:19:25.497 "blocks": 2560, 00:19:25.497 "percent": 32 00:19:25.497 } 00:19:25.497 }, 00:19:25.497 "base_bdevs_list": [ 00:19:25.497 { 00:19:25.497 "name": "spare", 00:19:25.497 "uuid": "9ec2b5ea-536a-5eec-b68d-4724129df4f7", 00:19:25.497 "is_configured": true, 00:19:25.497 "data_offset": 256, 00:19:25.497 "data_size": 7936 00:19:25.497 }, 00:19:25.497 { 00:19:25.497 "name": "BaseBdev2", 00:19:25.497 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:25.497 "is_configured": true, 00:19:25.497 "data_offset": 256, 00:19:25.497 "data_size": 7936 00:19:25.497 } 00:19:25.497 ] 00:19:25.497 }' 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.497 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.497 [2024-10-09 03:22:08.761191] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:19:25.757 [2024-10-09 03:22:08.810820] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:25.757 [2024-10-09 03:22:08.810888] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:25.757 [2024-10-09 03:22:08.810902] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:25.757 [2024-10-09 03:22:08.810913] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:25.757 "name": "raid_bdev1", 00:19:25.757 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:25.757 "strip_size_kb": 0, 00:19:25.757 "state": "online", 00:19:25.757 "raid_level": "raid1", 00:19:25.757 "superblock": true, 00:19:25.757 "num_base_bdevs": 2, 00:19:25.757 "num_base_bdevs_discovered": 1, 00:19:25.757 "num_base_bdevs_operational": 1, 00:19:25.757 "base_bdevs_list": [ 00:19:25.757 { 00:19:25.757 "name": null, 00:19:25.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:25.757 "is_configured": false, 00:19:25.757 "data_offset": 0, 00:19:25.757 "data_size": 7936 00:19:25.757 }, 00:19:25.757 { 00:19:25.757 "name": "BaseBdev2", 00:19:25.757 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:25.757 "is_configured": true, 00:19:25.757 "data_offset": 256, 00:19:25.757 "data_size": 7936 00:19:25.757 } 00:19:25.757 ] 00:19:25.757 }' 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:25.757 03:22:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.016 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:26.016 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:19:26.016 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:26.016 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:26.016 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.016 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.016 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.016 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.016 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.276 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.276 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.276 "name": "raid_bdev1", 00:19:26.276 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:26.276 "strip_size_kb": 0, 00:19:26.276 "state": "online", 00:19:26.276 "raid_level": "raid1", 00:19:26.276 "superblock": true, 00:19:26.276 "num_base_bdevs": 2, 00:19:26.276 "num_base_bdevs_discovered": 1, 00:19:26.276 "num_base_bdevs_operational": 1, 00:19:26.276 "base_bdevs_list": [ 00:19:26.276 { 00:19:26.276 "name": null, 00:19:26.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.276 "is_configured": false, 00:19:26.276 "data_offset": 0, 00:19:26.276 "data_size": 7936 00:19:26.276 }, 00:19:26.276 { 00:19:26.276 "name": "BaseBdev2", 00:19:26.276 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:26.276 "is_configured": true, 00:19:26.276 "data_offset": 256, 00:19:26.276 "data_size": 7936 00:19:26.276 } 00:19:26.276 ] 00:19:26.276 }' 
00:19:26.276 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.276 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:26.276 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.276 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:26.276 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:26.276 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.276 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:26.276 [2024-10-09 03:22:09.450539] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:26.276 [2024-10-09 03:22:09.464914] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:26.276 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.276 03:22:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:26.276 [2024-10-09 03:22:09.467007] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:27.217 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.217 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.217 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.217 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:19:27.217 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.217 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.217 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.217 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.217 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.217 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.477 "name": "raid_bdev1", 00:19:27.477 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:27.477 "strip_size_kb": 0, 00:19:27.477 "state": "online", 00:19:27.477 "raid_level": "raid1", 00:19:27.477 "superblock": true, 00:19:27.477 "num_base_bdevs": 2, 00:19:27.477 "num_base_bdevs_discovered": 2, 00:19:27.477 "num_base_bdevs_operational": 2, 00:19:27.477 "process": { 00:19:27.477 "type": "rebuild", 00:19:27.477 "target": "spare", 00:19:27.477 "progress": { 00:19:27.477 "blocks": 2560, 00:19:27.477 "percent": 32 00:19:27.477 } 00:19:27.477 }, 00:19:27.477 "base_bdevs_list": [ 00:19:27.477 { 00:19:27.477 "name": "spare", 00:19:27.477 "uuid": "9ec2b5ea-536a-5eec-b68d-4724129df4f7", 00:19:27.477 "is_configured": true, 00:19:27.477 "data_offset": 256, 00:19:27.477 "data_size": 7936 00:19:27.477 }, 00:19:27.477 { 00:19:27.477 "name": "BaseBdev2", 00:19:27.477 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:27.477 "is_configured": true, 00:19:27.477 "data_offset": 256, 00:19:27.477 "data_size": 7936 00:19:27.477 } 00:19:27.477 ] 00:19:27.477 }' 00:19:27.477 03:22:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:27.477 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=757 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:27.477 03:22:10 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.477 "name": "raid_bdev1", 00:19:27.477 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:27.477 "strip_size_kb": 0, 00:19:27.477 "state": "online", 00:19:27.477 "raid_level": "raid1", 00:19:27.477 "superblock": true, 00:19:27.477 "num_base_bdevs": 2, 00:19:27.477 "num_base_bdevs_discovered": 2, 00:19:27.477 "num_base_bdevs_operational": 2, 00:19:27.477 "process": { 00:19:27.477 "type": "rebuild", 00:19:27.477 "target": "spare", 00:19:27.477 "progress": { 00:19:27.477 "blocks": 2816, 00:19:27.477 "percent": 35 00:19:27.477 } 00:19:27.477 }, 00:19:27.477 "base_bdevs_list": [ 00:19:27.477 { 00:19:27.477 "name": "spare", 00:19:27.477 "uuid": "9ec2b5ea-536a-5eec-b68d-4724129df4f7", 00:19:27.477 "is_configured": true, 00:19:27.477 "data_offset": 256, 00:19:27.477 "data_size": 7936 00:19:27.477 }, 00:19:27.477 { 00:19:27.477 "name": "BaseBdev2", 00:19:27.477 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:27.477 "is_configured": true, 00:19:27.477 "data_offset": 256, 00:19:27.477 "data_size": 7936 00:19:27.477 } 00:19:27.477 ] 00:19:27.477 }' 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.477 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.478 03:22:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.861 03:22:11 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.861 "name": "raid_bdev1", 00:19:28.861 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:28.861 "strip_size_kb": 0, 00:19:28.861 "state": "online", 00:19:28.861 "raid_level": "raid1", 00:19:28.861 "superblock": true, 00:19:28.861 "num_base_bdevs": 2, 00:19:28.861 "num_base_bdevs_discovered": 2, 00:19:28.861 "num_base_bdevs_operational": 2, 00:19:28.861 "process": { 00:19:28.861 "type": "rebuild", 00:19:28.861 "target": "spare", 00:19:28.861 "progress": { 00:19:28.861 "blocks": 5632, 00:19:28.861 "percent": 70 00:19:28.861 } 00:19:28.861 }, 00:19:28.861 "base_bdevs_list": [ 00:19:28.861 { 00:19:28.861 "name": "spare", 00:19:28.861 "uuid": "9ec2b5ea-536a-5eec-b68d-4724129df4f7", 00:19:28.861 "is_configured": true, 00:19:28.861 "data_offset": 256, 00:19:28.861 "data_size": 7936 00:19:28.861 }, 00:19:28.861 { 00:19:28.861 "name": "BaseBdev2", 00:19:28.861 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:28.861 "is_configured": true, 00:19:28.861 "data_offset": 256, 00:19:28.861 "data_size": 7936 00:19:28.861 } 00:19:28.861 ] 00:19:28.861 }' 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:28.861 03:22:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:29.431 [2024-10-09 03:22:12.586564] bdev_raid.c:2896:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:29.431 [2024-10-09 03:22:12.586718] 
bdev_raid.c:2558:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:29.431 [2024-10-09 03:22:12.586829] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:29.690 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:29.690 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:29.690 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.690 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:29.690 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:29.690 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.690 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.690 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.690 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.690 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.690 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.690 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.690 "name": "raid_bdev1", 00:19:29.690 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:29.690 "strip_size_kb": 0, 00:19:29.690 "state": "online", 00:19:29.690 "raid_level": "raid1", 00:19:29.690 "superblock": true, 00:19:29.690 "num_base_bdevs": 2, 00:19:29.690 
"num_base_bdevs_discovered": 2, 00:19:29.690 "num_base_bdevs_operational": 2, 00:19:29.690 "base_bdevs_list": [ 00:19:29.690 { 00:19:29.690 "name": "spare", 00:19:29.690 "uuid": "9ec2b5ea-536a-5eec-b68d-4724129df4f7", 00:19:29.690 "is_configured": true, 00:19:29.690 "data_offset": 256, 00:19:29.690 "data_size": 7936 00:19:29.690 }, 00:19:29.690 { 00:19:29.690 "name": "BaseBdev2", 00:19:29.691 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:29.691 "is_configured": true, 00:19:29.691 "data_offset": 256, 00:19:29.691 "data_size": 7936 00:19:29.691 } 00:19:29.691 ] 00:19:29.691 }' 00:19:29.691 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.691 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:29.691 03:22:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.951 03:22:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:29.951 "name": "raid_bdev1", 00:19:29.951 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:29.951 "strip_size_kb": 0, 00:19:29.951 "state": "online", 00:19:29.951 "raid_level": "raid1", 00:19:29.951 "superblock": true, 00:19:29.951 "num_base_bdevs": 2, 00:19:29.951 "num_base_bdevs_discovered": 2, 00:19:29.951 "num_base_bdevs_operational": 2, 00:19:29.951 "base_bdevs_list": [ 00:19:29.951 { 00:19:29.951 "name": "spare", 00:19:29.951 "uuid": "9ec2b5ea-536a-5eec-b68d-4724129df4f7", 00:19:29.951 "is_configured": true, 00:19:29.951 "data_offset": 256, 00:19:29.951 "data_size": 7936 00:19:29.951 }, 00:19:29.951 { 00:19:29.951 "name": "BaseBdev2", 00:19:29.951 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:29.951 "is_configured": true, 00:19:29.951 "data_offset": 256, 00:19:29.951 "data_size": 7936 00:19:29.951 } 00:19:29.951 ] 00:19:29.951 }' 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:29.951 03:22:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:29.951 "name": 
"raid_bdev1", 00:19:29.951 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:29.951 "strip_size_kb": 0, 00:19:29.951 "state": "online", 00:19:29.951 "raid_level": "raid1", 00:19:29.951 "superblock": true, 00:19:29.951 "num_base_bdevs": 2, 00:19:29.951 "num_base_bdevs_discovered": 2, 00:19:29.951 "num_base_bdevs_operational": 2, 00:19:29.951 "base_bdevs_list": [ 00:19:29.951 { 00:19:29.951 "name": "spare", 00:19:29.951 "uuid": "9ec2b5ea-536a-5eec-b68d-4724129df4f7", 00:19:29.951 "is_configured": true, 00:19:29.951 "data_offset": 256, 00:19:29.951 "data_size": 7936 00:19:29.951 }, 00:19:29.951 { 00:19:29.951 "name": "BaseBdev2", 00:19:29.951 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:29.951 "is_configured": true, 00:19:29.951 "data_offset": 256, 00:19:29.951 "data_size": 7936 00:19:29.951 } 00:19:29.951 ] 00:19:29.951 }' 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:29.951 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.522 [2024-10-09 03:22:13.636279] bdev_raid.c:2407:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:30.522 [2024-10-09 03:22:13.636374] bdev_raid.c:1895:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:30.522 [2024-10-09 03:22:13.636466] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:30.522 [2024-10-09 03:22:13.636540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:30.522 [2024-10-09 
03:22:13.636571] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.522 03:22:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.522 [2024-10-09 03:22:13.708157] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:30.522 [2024-10-09 03:22:13.708211] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:30.522 [2024-10-09 03:22:13.708236] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:19:30.522 [2024-10-09 03:22:13.708245] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:30.522 [2024-10-09 03:22:13.710449] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:30.522 [2024-10-09 03:22:13.710484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:30.522 [2024-10-09 03:22:13.710533] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:30.522 [2024-10-09 03:22:13.710584] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:30.522 [2024-10-09 03:22:13.710684] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:30.522 spare 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.522 [2024-10-09 03:22:13.810574] bdev_raid.c:1730:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:30.522 [2024-10-09 03:22:13.810602] bdev_raid.c:1731:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:19:30.522 [2024-10-09 03:22:13.810691] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:19:30.522 [2024-10-09 03:22:13.810765] bdev_raid.c:1760:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:30.522 [2024-10-09 03:22:13.810772] bdev_raid.c:1761:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:30.522 [2024-10-09 03:22:13.810860] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.522 03:22:13 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.522 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:30.782 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.782 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.782 "name": "raid_bdev1", 00:19:30.782 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:30.782 "strip_size_kb": 0, 00:19:30.782 "state": "online", 00:19:30.782 "raid_level": "raid1", 00:19:30.782 "superblock": true, 00:19:30.782 "num_base_bdevs": 2, 00:19:30.782 "num_base_bdevs_discovered": 2, 00:19:30.782 "num_base_bdevs_operational": 2, 00:19:30.782 "base_bdevs_list": [ 00:19:30.782 { 00:19:30.782 "name": "spare", 00:19:30.782 "uuid": "9ec2b5ea-536a-5eec-b68d-4724129df4f7", 00:19:30.782 "is_configured": true, 00:19:30.783 "data_offset": 256, 00:19:30.783 "data_size": 7936 00:19:30.783 }, 00:19:30.783 { 00:19:30.783 "name": "BaseBdev2", 00:19:30.783 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:30.783 "is_configured": true, 00:19:30.783 "data_offset": 256, 00:19:30.783 "data_size": 7936 00:19:30.783 } 00:19:30.783 ] 00:19:30.783 }' 00:19:30.783 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.783 03:22:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.043 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:31.043 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:31.043 03:22:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:31.043 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:31.043 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:31.043 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.043 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.043 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.043 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.043 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.043 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:31.043 "name": "raid_bdev1", 00:19:31.043 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:31.043 "strip_size_kb": 0, 00:19:31.043 "state": "online", 00:19:31.043 "raid_level": "raid1", 00:19:31.043 "superblock": true, 00:19:31.043 "num_base_bdevs": 2, 00:19:31.043 "num_base_bdevs_discovered": 2, 00:19:31.043 "num_base_bdevs_operational": 2, 00:19:31.043 "base_bdevs_list": [ 00:19:31.043 { 00:19:31.043 "name": "spare", 00:19:31.043 "uuid": "9ec2b5ea-536a-5eec-b68d-4724129df4f7", 00:19:31.043 "is_configured": true, 00:19:31.043 "data_offset": 256, 00:19:31.043 "data_size": 7936 00:19:31.043 }, 00:19:31.043 { 00:19:31.043 "name": "BaseBdev2", 00:19:31.043 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:31.043 "is_configured": true, 00:19:31.043 "data_offset": 256, 00:19:31.043 "data_size": 7936 00:19:31.043 } 00:19:31.043 ] 00:19:31.043 }' 00:19:31.043 03:22:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:31.043 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:31.043 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:31.302 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:31.302 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:31.302 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.302 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.302 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.302 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.302 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.303 [2024-10-09 03:22:14.414971] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:31.303 03:22:14 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.303 "name": "raid_bdev1", 00:19:31.303 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:31.303 "strip_size_kb": 0, 00:19:31.303 "state": "online", 00:19:31.303 
"raid_level": "raid1", 00:19:31.303 "superblock": true, 00:19:31.303 "num_base_bdevs": 2, 00:19:31.303 "num_base_bdevs_discovered": 1, 00:19:31.303 "num_base_bdevs_operational": 1, 00:19:31.303 "base_bdevs_list": [ 00:19:31.303 { 00:19:31.303 "name": null, 00:19:31.303 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.303 "is_configured": false, 00:19:31.303 "data_offset": 0, 00:19:31.303 "data_size": 7936 00:19:31.303 }, 00:19:31.303 { 00:19:31.303 "name": "BaseBdev2", 00:19:31.303 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:31.303 "is_configured": true, 00:19:31.303 "data_offset": 256, 00:19:31.303 "data_size": 7936 00:19:31.303 } 00:19:31.303 ] 00:19:31.303 }' 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.303 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.563 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:31.563 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.563 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:31.563 [2024-10-09 03:22:14.834495] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:31.563 [2024-10-09 03:22:14.834613] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:31.563 [2024-10-09 03:22:14.834630] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:31.563 [2024-10-09 03:22:14.834659] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:31.563 [2024-10-09 03:22:14.848430] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:31.563 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.563 03:22:14 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:31.563 [2024-10-09 03:22:14.850482] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:32.945 03:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:32.945 03:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.945 03:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:32.945 03:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:32.945 03:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.945 03:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.945 03:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.945 03:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.945 03:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.945 03:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.945 03:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:19:32.945 "name": "raid_bdev1", 00:19:32.945 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:32.945 "strip_size_kb": 0, 00:19:32.945 "state": "online", 00:19:32.945 "raid_level": "raid1", 00:19:32.945 "superblock": true, 00:19:32.945 "num_base_bdevs": 2, 00:19:32.945 "num_base_bdevs_discovered": 2, 00:19:32.945 "num_base_bdevs_operational": 2, 00:19:32.945 "process": { 00:19:32.945 "type": "rebuild", 00:19:32.945 "target": "spare", 00:19:32.945 "progress": { 00:19:32.945 "blocks": 2560, 00:19:32.945 "percent": 32 00:19:32.945 } 00:19:32.945 }, 00:19:32.945 "base_bdevs_list": [ 00:19:32.945 { 00:19:32.945 "name": "spare", 00:19:32.945 "uuid": "9ec2b5ea-536a-5eec-b68d-4724129df4f7", 00:19:32.945 "is_configured": true, 00:19:32.945 "data_offset": 256, 00:19:32.945 "data_size": 7936 00:19:32.945 }, 00:19:32.945 { 00:19:32.945 "name": "BaseBdev2", 00:19:32.945 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:32.945 "is_configured": true, 00:19:32.945 "data_offset": 256, 00:19:32.945 "data_size": 7936 00:19:32.945 } 00:19:32.945 ] 00:19:32.945 }' 00:19:32.945 03:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.945 03:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:32.945 03:22:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.945 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:32.945 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:32.945 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.945 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.945 [2024-10-09 03:22:16.014416] 
bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:32.945 [2024-10-09 03:22:16.058302] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:32.945 [2024-10-09 03:22:16.058414] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:32.945 [2024-10-09 03:22:16.058429] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:32.945 [2024-10-09 03:22:16.058439] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:32.945 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.945 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:32.945 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:32.946 03:22:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:32.946 "name": "raid_bdev1", 00:19:32.946 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:32.946 "strip_size_kb": 0, 00:19:32.946 "state": "online", 00:19:32.946 "raid_level": "raid1", 00:19:32.946 "superblock": true, 00:19:32.946 "num_base_bdevs": 2, 00:19:32.946 "num_base_bdevs_discovered": 1, 00:19:32.946 "num_base_bdevs_operational": 1, 00:19:32.946 "base_bdevs_list": [ 00:19:32.946 { 00:19:32.946 "name": null, 00:19:32.946 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.946 "is_configured": false, 00:19:32.946 "data_offset": 0, 00:19:32.946 "data_size": 7936 00:19:32.946 }, 00:19:32.946 { 00:19:32.946 "name": "BaseBdev2", 00:19:32.946 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:32.946 "is_configured": true, 00:19:32.946 "data_offset": 256, 00:19:32.946 "data_size": 7936 00:19:32.946 } 00:19:32.946 ] 00:19:32.946 }' 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:32.946 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.206 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:33.206 03:22:16 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.206 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:33.206 [2024-10-09 03:22:16.485537] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:33.206 [2024-10-09 03:22:16.485643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:33.206 [2024-10-09 03:22:16.485687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:33.206 [2024-10-09 03:22:16.485718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:33.206 [2024-10-09 03:22:16.485936] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:33.206 [2024-10-09 03:22:16.485988] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:33.206 [2024-10-09 03:22:16.486057] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:33.206 [2024-10-09 03:22:16.486093] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:33.206 [2024-10-09 03:22:16.486134] bdev_raid.c:3748:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:33.206 [2024-10-09 03:22:16.486201] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:33.206 [2024-10-09 03:22:16.499358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:19:33.206 spare 00:19:33.206 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.206 03:22:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:33.206 [2024-10-09 03:22:16.501430] bdev_raid.c:2931:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:19:34.587 "name": "raid_bdev1", 00:19:34.587 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:34.587 "strip_size_kb": 0, 00:19:34.587 "state": "online", 00:19:34.587 "raid_level": "raid1", 00:19:34.587 "superblock": true, 00:19:34.587 "num_base_bdevs": 2, 00:19:34.587 "num_base_bdevs_discovered": 2, 00:19:34.587 "num_base_bdevs_operational": 2, 00:19:34.587 "process": { 00:19:34.587 "type": "rebuild", 00:19:34.587 "target": "spare", 00:19:34.587 "progress": { 00:19:34.587 "blocks": 2560, 00:19:34.587 "percent": 32 00:19:34.587 } 00:19:34.587 }, 00:19:34.587 "base_bdevs_list": [ 00:19:34.587 { 00:19:34.587 "name": "spare", 00:19:34.587 "uuid": "9ec2b5ea-536a-5eec-b68d-4724129df4f7", 00:19:34.587 "is_configured": true, 00:19:34.587 "data_offset": 256, 00:19:34.587 "data_size": 7936 00:19:34.587 }, 00:19:34.587 { 00:19:34.587 "name": "BaseBdev2", 00:19:34.587 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:34.587 "is_configured": true, 00:19:34.587 "data_offset": 256, 00:19:34.587 "data_size": 7936 00:19:34.587 } 00:19:34.587 ] 00:19:34.587 }' 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.587 [2024-10-09 
03:22:17.661033] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:34.587 [2024-10-09 03:22:17.708932] bdev_raid.c:2567:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:34.587 [2024-10-09 03:22:17.708984] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:34.587 [2024-10-09 03:22:17.709001] bdev_raid.c:2171:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:34.587 [2024-10-09 03:22:17.709008] bdev_raid.c:2505:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.587 03:22:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.587 "name": "raid_bdev1", 00:19:34.587 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:34.587 "strip_size_kb": 0, 00:19:34.587 "state": "online", 00:19:34.587 "raid_level": "raid1", 00:19:34.587 "superblock": true, 00:19:34.587 "num_base_bdevs": 2, 00:19:34.587 "num_base_bdevs_discovered": 1, 00:19:34.587 "num_base_bdevs_operational": 1, 00:19:34.587 "base_bdevs_list": [ 00:19:34.587 { 00:19:34.587 "name": null, 00:19:34.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.587 "is_configured": false, 00:19:34.587 "data_offset": 0, 00:19:34.587 "data_size": 7936 00:19:34.587 }, 00:19:34.587 { 00:19:34.587 "name": "BaseBdev2", 00:19:34.587 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:34.587 "is_configured": true, 00:19:34.587 "data_offset": 256, 00:19:34.587 "data_size": 7936 00:19:34.587 } 00:19:34.587 ] 00:19:34.587 }' 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.587 03:22:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:35.157 03:22:18 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:35.157 "name": "raid_bdev1", 00:19:35.157 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:35.157 "strip_size_kb": 0, 00:19:35.157 "state": "online", 00:19:35.157 "raid_level": "raid1", 00:19:35.157 "superblock": true, 00:19:35.157 "num_base_bdevs": 2, 00:19:35.157 "num_base_bdevs_discovered": 1, 00:19:35.157 "num_base_bdevs_operational": 1, 00:19:35.157 "base_bdevs_list": [ 00:19:35.157 { 00:19:35.157 "name": null, 00:19:35.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.157 "is_configured": false, 00:19:35.157 "data_offset": 0, 00:19:35.157 "data_size": 7936 00:19:35.157 }, 00:19:35.157 { 00:19:35.157 "name": "BaseBdev2", 00:19:35.157 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:35.157 "is_configured": true, 00:19:35.157 "data_offset": 256, 
00:19:35.157 "data_size": 7936 00:19:35.157 } 00:19:35.157 ] 00:19:35.157 }' 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:35.157 [2024-10-09 03:22:18.319853] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:35.157 [2024-10-09 03:22:18.319952] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:35.157 [2024-10-09 03:22:18.319984] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:19:35.157 [2024-10-09 03:22:18.319994] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:35.157 [2024-10-09 03:22:18.320154] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:35.157 [2024-10-09 03:22:18.320165] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:35.157 [2024-10-09 03:22:18.320210] bdev_raid.c:3897:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:35.157 [2024-10-09 03:22:18.320221] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:35.157 [2024-10-09 03:22:18.320232] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:35.157 [2024-10-09 03:22:18.320242] bdev_raid.c:3884:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:35.157 BaseBdev1 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.157 03:22:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.097 03:22:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.097 "name": "raid_bdev1", 00:19:36.097 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:36.097 "strip_size_kb": 0, 00:19:36.097 "state": "online", 00:19:36.097 "raid_level": "raid1", 00:19:36.097 "superblock": true, 00:19:36.097 "num_base_bdevs": 2, 00:19:36.097 "num_base_bdevs_discovered": 1, 00:19:36.097 "num_base_bdevs_operational": 1, 00:19:36.097 "base_bdevs_list": [ 00:19:36.097 { 00:19:36.097 "name": null, 00:19:36.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.097 "is_configured": false, 00:19:36.097 "data_offset": 0, 00:19:36.097 "data_size": 7936 00:19:36.097 }, 00:19:36.097 { 00:19:36.097 "name": "BaseBdev2", 00:19:36.097 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:36.097 "is_configured": true, 00:19:36.097 "data_offset": 256, 00:19:36.097 "data_size": 7936 00:19:36.097 } 00:19:36.097 ] 00:19:36.097 }' 00:19:36.097 03:22:19 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.097 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:36.666 "name": "raid_bdev1", 00:19:36.666 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:36.666 "strip_size_kb": 0, 00:19:36.666 "state": "online", 00:19:36.666 "raid_level": "raid1", 00:19:36.666 "superblock": true, 00:19:36.666 "num_base_bdevs": 2, 00:19:36.666 "num_base_bdevs_discovered": 1, 00:19:36.666 "num_base_bdevs_operational": 1, 00:19:36.666 "base_bdevs_list": [ 00:19:36.666 { 00:19:36.666 "name": 
null, 00:19:36.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:36.666 "is_configured": false, 00:19:36.666 "data_offset": 0, 00:19:36.666 "data_size": 7936 00:19:36.666 }, 00:19:36.666 { 00:19:36.666 "name": "BaseBdev2", 00:19:36.666 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:36.666 "is_configured": true, 00:19:36.666 "data_offset": 256, 00:19:36.666 "data_size": 7936 00:19:36.666 } 00:19:36.666 ] 00:19:36.666 }' 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # local es=0 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@653 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:36.666 [2024-10-09 03:22:19.953090] bdev_raid.c:3322:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:36.666 [2024-10-09 03:22:19.953190] bdev_raid.c:3690:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:36.666 [2024-10-09 03:22:19.953208] bdev_raid.c:3709:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:36.666 request: 00:19:36.666 { 00:19:36.666 "base_bdev": "BaseBdev1", 00:19:36.666 "raid_bdev": "raid_bdev1", 00:19:36.666 "method": "bdev_raid_add_base_bdev", 00:19:36.666 "req_id": 1 00:19:36.666 } 00:19:36.666 Got JSON-RPC error response 00:19:36.666 response: 00:19:36.666 { 00:19:36.666 "code": -22, 00:19:36.666 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:19:36.666 } 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@653 -- # es=1 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:36.666 03:22:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:38.048 03:22:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.048 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:38.048 "name": "raid_bdev1", 00:19:38.048 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:38.048 "strip_size_kb": 0, 
00:19:38.048 "state": "online", 00:19:38.048 "raid_level": "raid1", 00:19:38.048 "superblock": true, 00:19:38.048 "num_base_bdevs": 2, 00:19:38.048 "num_base_bdevs_discovered": 1, 00:19:38.048 "num_base_bdevs_operational": 1, 00:19:38.048 "base_bdevs_list": [ 00:19:38.048 { 00:19:38.048 "name": null, 00:19:38.048 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.048 "is_configured": false, 00:19:38.048 "data_offset": 0, 00:19:38.048 "data_size": 7936 00:19:38.048 }, 00:19:38.048 { 00:19:38.048 "name": "BaseBdev2", 00:19:38.048 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:38.048 "is_configured": true, 00:19:38.048 "data_offset": 256, 00:19:38.048 "data_size": 7936 00:19:38.048 } 00:19:38.048 ] 00:19:38.048 }' 00:19:38.048 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:38.048 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.308 
03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:38.308 "name": "raid_bdev1", 00:19:38.308 "uuid": "95cddd0f-ca7c-444e-b6a6-347de701a40e", 00:19:38.308 "strip_size_kb": 0, 00:19:38.308 "state": "online", 00:19:38.308 "raid_level": "raid1", 00:19:38.308 "superblock": true, 00:19:38.308 "num_base_bdevs": 2, 00:19:38.308 "num_base_bdevs_discovered": 1, 00:19:38.308 "num_base_bdevs_operational": 1, 00:19:38.308 "base_bdevs_list": [ 00:19:38.308 { 00:19:38.308 "name": null, 00:19:38.308 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:38.308 "is_configured": false, 00:19:38.308 "data_offset": 0, 00:19:38.308 "data_size": 7936 00:19:38.308 }, 00:19:38.308 { 00:19:38.308 "name": "BaseBdev2", 00:19:38.308 "uuid": "8cbefd9f-1575-5a01-9d25-938f9094b547", 00:19:38.308 "is_configured": true, 00:19:38.308 "data_offset": 256, 00:19:38.308 "data_size": 7936 00:19:38.308 } 00:19:38.308 ] 00:19:38.308 }' 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89330 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@950 -- # '[' -z 89330 ']' 00:19:38.308 03:22:21 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # kill -0 89330 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # uname 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 89330 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@968 -- # echo 'killing process with pid 89330' 00:19:38.308 killing process with pid 89330 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@969 -- # kill 89330 00:19:38.308 Received shutdown signal, test time was about 60.000000 seconds 00:19:38.308 00:19:38.308 Latency(us) 00:19:38.308 [2024-10-09T03:22:21.611Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:38.308 [2024-10-09T03:22:21.611Z] =================================================================================================================== 00:19:38.308 [2024-10-09T03:22:21.611Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:38.308 [2024-10-09 03:22:21.564409] bdev_raid.c:1383:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:38.308 [2024-10-09 03:22:21.564617] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.308 [2024-10-09 03:22:21.564699] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.308 [2024-10-09 03:22:21.564712] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:38.308 03:22:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@974 -- # wait 89330 00:19:38.878 [2024-10-09 03:22:21.875264] bdev_raid.c:1409:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:40.260 03:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:19:40.260 00:19:40.260 real 0m17.715s 00:19:40.260 user 0m23.082s 00:19:40.260 sys 0m1.708s 00:19:40.260 03:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:40.260 03:22:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:19:40.260 ************************************ 00:19:40.260 END TEST raid_rebuild_test_sb_md_interleaved 00:19:40.260 ************************************ 00:19:40.260 03:22:23 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:19:40.260 03:22:23 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:19:40.260 03:22:23 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89330 ']' 00:19:40.260 03:22:23 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89330 00:19:40.260 03:22:23 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:19:40.260 00:19:40.260 real 12m19.616s 00:19:40.260 user 16m19.065s 00:19:40.260 sys 2m1.083s 00:19:40.260 03:22:23 bdev_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:40.260 03:22:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.260 ************************************ 00:19:40.260 END TEST bdev_raid 00:19:40.260 ************************************ 00:19:40.260 03:22:23 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:40.260 03:22:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:40.260 03:22:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:40.260 03:22:23 -- common/autotest_common.sh@10 -- # set +x 00:19:40.260 
************************************ 00:19:40.260 START TEST spdkcli_raid 00:19:40.260 ************************************ 00:19:40.260 03:22:23 spdkcli_raid -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:40.260 * Looking for test storage... 00:19:40.260 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:40.260 03:22:23 spdkcli_raid -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:40.260 03:22:23 spdkcli_raid -- common/autotest_common.sh@1681 -- # lcov --version 00:19:40.260 03:22:23 spdkcli_raid -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:40.260 03:22:23 spdkcli_raid -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:40.260 03:22:23 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:19:40.260 03:22:23 spdkcli_raid -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:40.260 03:22:23 spdkcli_raid -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:40.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.260 --rc genhtml_branch_coverage=1 00:19:40.260 --rc genhtml_function_coverage=1 00:19:40.260 --rc genhtml_legend=1 00:19:40.260 --rc geninfo_all_blocks=1 00:19:40.260 --rc geninfo_unexecuted_blocks=1 00:19:40.260 00:19:40.260 ' 00:19:40.260 03:22:23 spdkcli_raid -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:40.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.260 --rc genhtml_branch_coverage=1 00:19:40.260 --rc genhtml_function_coverage=1 00:19:40.260 --rc genhtml_legend=1 00:19:40.260 --rc geninfo_all_blocks=1 00:19:40.260 --rc geninfo_unexecuted_blocks=1 00:19:40.260 00:19:40.260 ' 00:19:40.260 
03:22:23 spdkcli_raid -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:40.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.261 --rc genhtml_branch_coverage=1 00:19:40.261 --rc genhtml_function_coverage=1 00:19:40.261 --rc genhtml_legend=1 00:19:40.261 --rc geninfo_all_blocks=1 00:19:40.261 --rc geninfo_unexecuted_blocks=1 00:19:40.261 00:19:40.261 ' 00:19:40.261 03:22:23 spdkcli_raid -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:40.261 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:40.261 --rc genhtml_branch_coverage=1 00:19:40.261 --rc genhtml_function_coverage=1 00:19:40.261 --rc genhtml_legend=1 00:19:40.261 --rc geninfo_all_blocks=1 00:19:40.261 --rc geninfo_unexecuted_blocks=1 00:19:40.261 00:19:40.261 ' 00:19:40.261 03:22:23 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:40.261 03:22:23 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:40.261 03:22:23 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:40.261 03:22:23 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:19:40.261 03:22:23 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:19:40.261 03:22:23 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:19:40.261 03:22:23 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:19:40.261 03:22:23 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:19:40.521 03:22:23 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:40.521 03:22:23 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:19:40.521 03:22:23 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:19:40.521 03:22:23 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:19:40.521 03:22:23 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:19:40.521 03:22:23 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:19:40.521 03:22:23 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:19:40.521 03:22:23 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:40.521 03:22:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.521 03:22:23 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:19:40.521 03:22:23 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90013 00:19:40.521 03:22:23 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:19:40.521 03:22:23 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90013 00:19:40.521 03:22:23 spdkcli_raid -- common/autotest_common.sh@831 -- # '[' -z 90013 ']' 00:19:40.521 03:22:23 spdkcli_raid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.521 03:22:23 spdkcli_raid -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:40.521 03:22:23 spdkcli_raid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.521 03:22:23 spdkcli_raid -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.521 03:22:23 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:40.521 [2024-10-09 03:22:23.680176] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:19:40.521 [2024-10-09 03:22:23.680310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90013 ] 00:19:40.781 [2024-10-09 03:22:23.848014] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:41.043 [2024-10-09 03:22:24.095298] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.043 [2024-10-09 03:22:24.095329] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.002 03:22:24 spdkcli_raid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:42.002 03:22:24 spdkcli_raid -- common/autotest_common.sh@864 -- # return 0 00:19:42.002 03:22:24 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:19:42.002 03:22:24 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:42.002 03:22:24 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.002 03:22:25 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:19:42.002 03:22:25 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:42.002 03:22:25 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:42.002 03:22:25 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:19:42.002 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:19:42.002 ' 00:19:43.382 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:19:43.382 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:19:43.382 03:22:26 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:19:43.382 03:22:26 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:43.382 03:22:26 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:19:43.641 03:22:26 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:19:43.641 03:22:26 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:43.641 03:22:26 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:43.641 03:22:26 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:19:43.641 ' 00:19:44.580 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:19:44.580 03:22:27 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:19:44.580 03:22:27 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:44.580 03:22:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:44.839 03:22:27 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:19:44.839 03:22:27 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:44.839 03:22:27 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:44.839 03:22:27 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:19:44.839 03:22:27 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:19:45.407 03:22:28 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:19:45.407 03:22:28 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:19:45.408 03:22:28 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:19:45.408 03:22:28 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:45.408 03:22:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:45.408 03:22:28 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:19:45.408 03:22:28 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:45.408 03:22:28 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:45.408 03:22:28 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:19:45.408 ' 00:19:46.346 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:19:46.346 03:22:29 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:19:46.346 03:22:29 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:46.346 03:22:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:46.346 03:22:29 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:19:46.346 03:22:29 spdkcli_raid -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:46.346 03:22:29 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:46.346 03:22:29 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:19:46.346 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:19:46.346 ' 00:19:47.726 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:19:47.726 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:19:47.985 03:22:31 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:19:47.985 03:22:31 spdkcli_raid -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:47.985 03:22:31 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:47.985 03:22:31 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90013 00:19:47.985 03:22:31 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 90013 ']' 00:19:47.985 03:22:31 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 90013 00:19:47.985 03:22:31 spdkcli_raid -- 
common/autotest_common.sh@955 -- # uname 00:19:47.985 03:22:31 spdkcli_raid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:47.985 03:22:31 spdkcli_raid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90013 00:19:47.985 03:22:31 spdkcli_raid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:47.985 03:22:31 spdkcli_raid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:47.985 killing process with pid 90013 00:19:47.985 03:22:31 spdkcli_raid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90013' 00:19:47.985 03:22:31 spdkcli_raid -- common/autotest_common.sh@969 -- # kill 90013 00:19:47.985 03:22:31 spdkcli_raid -- common/autotest_common.sh@974 -- # wait 90013 00:19:50.525 03:22:33 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:19:50.526 03:22:33 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90013 ']' 00:19:50.526 03:22:33 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90013 00:19:50.526 03:22:33 spdkcli_raid -- common/autotest_common.sh@950 -- # '[' -z 90013 ']' 00:19:50.526 03:22:33 spdkcli_raid -- common/autotest_common.sh@954 -- # kill -0 90013 00:19:50.526 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 954: kill: (90013) - No such process 00:19:50.526 Process with pid 90013 is not found 00:19:50.526 03:22:33 spdkcli_raid -- common/autotest_common.sh@977 -- # echo 'Process with pid 90013 is not found' 00:19:50.526 03:22:33 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:19:50.526 03:22:33 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:19:50.526 03:22:33 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:19:50.526 03:22:33 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:19:50.526 00:19:50.526 real 0m10.472s 00:19:50.526 user 0m20.962s 00:19:50.526 sys 
0m1.303s 00:19:50.526 03:22:33 spdkcli_raid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:50.526 03:22:33 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:19:50.526 ************************************ 00:19:50.526 END TEST spdkcli_raid 00:19:50.526 ************************************ 00:19:50.786 03:22:33 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:50.786 03:22:33 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:50.786 03:22:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:50.786 03:22:33 -- common/autotest_common.sh@10 -- # set +x 00:19:50.786 ************************************ 00:19:50.786 START TEST blockdev_raid5f 00:19:50.786 ************************************ 00:19:50.786 03:22:33 blockdev_raid5f -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:19:50.786 * Looking for test storage... 00:19:50.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:19:50.786 03:22:33 blockdev_raid5f -- common/autotest_common.sh@1680 -- # [[ y == y ]] 00:19:50.786 03:22:33 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lcov --version 00:19:50.786 03:22:33 blockdev_raid5f -- common/autotest_common.sh@1681 -- # awk '{print $NF}' 00:19:50.786 03:22:34 blockdev_raid5f -- common/autotest_common.sh@1681 -- # lt 1.15 2 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:50.786 03:22:34 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:19:50.786 03:22:34 blockdev_raid5f -- common/autotest_common.sh@1682 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:50.786 03:22:34 blockdev_raid5f -- common/autotest_common.sh@1694 -- # export 'LCOV_OPTS= 00:19:50.786 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.786 --rc genhtml_branch_coverage=1 00:19:50.786 --rc genhtml_function_coverage=1 00:19:50.786 --rc genhtml_legend=1 00:19:50.786 --rc geninfo_all_blocks=1 00:19:50.786 --rc geninfo_unexecuted_blocks=1 00:19:50.786 00:19:50.786 ' 00:19:50.786 03:22:34 blockdev_raid5f -- common/autotest_common.sh@1694 -- # LCOV_OPTS=' 00:19:50.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.786 --rc genhtml_branch_coverage=1 00:19:50.786 --rc genhtml_function_coverage=1 00:19:50.786 --rc genhtml_legend=1 00:19:50.786 --rc geninfo_all_blocks=1 00:19:50.786 --rc geninfo_unexecuted_blocks=1 00:19:50.786 00:19:50.786 ' 00:19:50.786 03:22:34 blockdev_raid5f -- common/autotest_common.sh@1695 -- # export 'LCOV=lcov 00:19:50.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.786 --rc genhtml_branch_coverage=1 00:19:50.786 --rc genhtml_function_coverage=1 00:19:50.786 --rc genhtml_legend=1 00:19:50.786 --rc geninfo_all_blocks=1 00:19:50.786 --rc geninfo_unexecuted_blocks=1 00:19:50.786 00:19:50.786 ' 00:19:50.786 03:22:34 blockdev_raid5f -- common/autotest_common.sh@1695 -- # LCOV='lcov 00:19:50.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:50.786 --rc genhtml_branch_coverage=1 00:19:50.786 --rc genhtml_function_coverage=1 00:19:50.786 --rc genhtml_legend=1 00:19:50.786 --rc geninfo_all_blocks=1 00:19:50.786 --rc geninfo_unexecuted_blocks=1 00:19:50.786 00:19:50.786 ' 00:19:50.786 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:19:50.786 03:22:34 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@669 -- # QOS_DEV_1=Malloc_0 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@670 -- # QOS_DEV_2=Null_1 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@671 -- # QOS_RUN_TIME=5 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@673 -- # uname -s 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@673 -- # '[' Linux = Linux ']' 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@675 -- # PRE_RESERVED_MEM=0 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@681 -- # test_type=raid5f 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@682 -- # crypto_device= 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@683 -- # dek= 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@684 -- # env_ctx= 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@685 -- # wait_for_rpc= 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@686 -- # '[' -n '' ']' 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == bdev ]] 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@689 -- # [[ raid5f == crypto_* ]] 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@692 -- # start_spdk_tgt 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90293 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:19:51.047 03:22:34 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90293 00:19:51.047 03:22:34 blockdev_raid5f -- common/autotest_common.sh@831 -- # '[' -z 90293 ']' 00:19:51.047 03:22:34 blockdev_raid5f -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:51.047 03:22:34 blockdev_raid5f -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:51.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:51.047 03:22:34 blockdev_raid5f -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:51.047 03:22:34 blockdev_raid5f -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:51.047 03:22:34 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:51.047 [2024-10-09 03:22:34.204327] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:19:51.047 [2024-10-09 03:22:34.204904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90293 ] 00:19:51.307 [2024-10-09 03:22:34.368755] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:51.307 [2024-10-09 03:22:34.604826] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.247 03:22:35 blockdev_raid5f -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.247 03:22:35 blockdev_raid5f -- common/autotest_common.sh@864 -- # return 0 00:19:52.247 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@693 -- # case "$test_type" in 00:19:52.247 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@725 -- # setup_raid5f_conf 00:19:52.247 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:19:52.247 03:22:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.247 03:22:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:52.247 Malloc0 00:19:52.507 Malloc1 00:19:52.507 Malloc2 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@736 -- # rpc_cmd bdev_wait_for_examine 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # cat 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n accel 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:52.507 03:22:35 
blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n bdev 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@739 -- # rpc_cmd save_subsystem_config -n iobuf 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@747 -- # mapfile -t bdevs 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@747 -- # rpc_cmd bdev_get_bdevs 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@747 -- # jq -r '.[] | select(.claimed == false)' 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@748 -- # mapfile -t bdevs_name 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@748 -- # jq -r .name 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@748 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "2c39a741-0b02-41d4-8b5a-ee67c94ce337"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "2c39a741-0b02-41d4-8b5a-ee67c94ce337",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' 
"supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "2c39a741-0b02-41d4-8b5a-ee67c94ce337",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "848733f7-ea55-4698-8050-d693511e0668",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "bd8482ba-f31e-4dc8-937b-95bf5f2a9aea",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "ecb2f7a6-ce0f-476f-aed8-c8a0f6ab03a1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@749 -- # bdev_list=("${bdevs_name[@]}") 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@751 -- # hello_world_bdev=raid5f 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@752 -- # trap - SIGINT SIGTERM EXIT 00:19:52.507 03:22:35 blockdev_raid5f -- bdev/blockdev.sh@753 -- # killprocess 90293 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@950 -- # '[' -z 90293 ']' 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@954 -- # kill -0 90293 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@955 -- # uname 00:19:52.507 03:22:35 blockdev_raid5f -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:52.507 
03:22:35 blockdev_raid5f -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90293 00:19:52.767 03:22:35 blockdev_raid5f -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:52.767 03:22:35 blockdev_raid5f -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:52.767 killing process with pid 90293 00:19:52.767 03:22:35 blockdev_raid5f -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90293' 00:19:52.767 03:22:35 blockdev_raid5f -- common/autotest_common.sh@969 -- # kill 90293 00:19:52.767 03:22:35 blockdev_raid5f -- common/autotest_common.sh@974 -- # wait 90293 00:19:56.061 03:22:38 blockdev_raid5f -- bdev/blockdev.sh@757 -- # trap cleanup SIGINT SIGTERM EXIT 00:19:56.061 03:22:38 blockdev_raid5f -- bdev/blockdev.sh@759 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:56.061 03:22:38 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 7 -le 1 ']' 00:19:56.061 03:22:38 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:56.061 03:22:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:56.061 ************************************ 00:19:56.061 START TEST bdev_hello_world 00:19:56.061 ************************************ 00:19:56.061 03:22:38 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:19:56.061 [2024-10-09 03:22:38.804009] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:19:56.061 [2024-10-09 03:22:38.804126] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90366 ] 00:19:56.061 [2024-10-09 03:22:38.967208] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:56.061 [2024-10-09 03:22:39.216444] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:56.630 [2024-10-09 03:22:39.833980] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:19:56.630 [2024-10-09 03:22:39.834037] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:19:56.630 [2024-10-09 03:22:39.834055] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:19:56.630 [2024-10-09 03:22:39.834556] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:19:56.630 [2024-10-09 03:22:39.834721] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:19:56.630 [2024-10-09 03:22:39.834752] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:19:56.630 [2024-10-09 03:22:39.834802] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:19:56.630 00:19:56.631 [2024-10-09 03:22:39.834821] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:19:58.541 00:19:58.541 real 0m2.700s 00:19:58.541 user 0m2.223s 00:19:58.541 sys 0m0.356s 00:19:58.541 03:22:41 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:58.541 03:22:41 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:19:58.541 ************************************ 00:19:58.541 END TEST bdev_hello_world 00:19:58.541 ************************************ 00:19:58.541 03:22:41 blockdev_raid5f -- bdev/blockdev.sh@760 -- # run_test bdev_bounds bdev_bounds '' 00:19:58.541 03:22:41 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:58.541 03:22:41 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:58.541 03:22:41 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:19:58.541 ************************************ 00:19:58.541 START TEST bdev_bounds 00:19:58.541 ************************************ 00:19:58.541 03:22:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1125 -- # bdev_bounds '' 00:19:58.541 03:22:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90418 00:19:58.541 03:22:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:19:58.541 03:22:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:19:58.541 Process bdevio pid: 90418 00:19:58.541 03:22:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90418' 00:19:58.541 03:22:41 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90418 00:19:58.541 03:22:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@831 -- # '[' -z 90418 ']' 00:19:58.541 03:22:41 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:58.541 03:22:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:58.541 03:22:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:58.541 03:22:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.541 03:22:41 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:19:58.541 [2024-10-09 03:22:41.599417] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:19:58.541 [2024-10-09 03:22:41.599623] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90418 ] 00:19:58.541 [2024-10-09 03:22:41.771358] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:58.801 [2024-10-09 03:22:42.008117] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.801 [2024-10-09 03:22:42.008355] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 2 00:19:58.801 [2024-10-09 03:22:42.008372] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:19:59.372 03:22:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:59.372 03:22:42 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # return 0 00:19:59.372 03:22:42 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:19:59.631 I/O targets: 00:19:59.631 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:19:59.631 00:19:59.631 
00:19:59.631 CUnit - A unit testing framework for C - Version 2.1-3 00:19:59.631 http://cunit.sourceforge.net/ 00:19:59.631 00:19:59.631 00:19:59.631 Suite: bdevio tests on: raid5f 00:19:59.631 Test: blockdev write read block ...passed 00:19:59.631 Test: blockdev write zeroes read block ...passed 00:19:59.631 Test: blockdev write zeroes read no split ...passed 00:19:59.631 Test: blockdev write zeroes read split ...passed 00:19:59.902 Test: blockdev write zeroes read split partial ...passed 00:19:59.902 Test: blockdev reset ...passed 00:19:59.902 Test: blockdev write read 8 blocks ...passed 00:19:59.902 Test: blockdev write read size > 128k ...passed 00:19:59.902 Test: blockdev write read invalid size ...passed 00:19:59.902 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:59.902 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:59.902 Test: blockdev write read max offset ...passed 00:19:59.902 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:59.902 Test: blockdev writev readv 8 blocks ...passed 00:19:59.902 Test: blockdev writev readv 30 x 1block ...passed 00:19:59.902 Test: blockdev writev readv block ...passed 00:19:59.902 Test: blockdev writev readv size > 128k ...passed 00:19:59.902 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:59.902 Test: blockdev comparev and writev ...passed 00:19:59.902 Test: blockdev nvme passthru rw ...passed 00:19:59.902 Test: blockdev nvme passthru vendor specific ...passed 00:19:59.902 Test: blockdev nvme admin passthru ...passed 00:19:59.902 Test: blockdev copy ...passed 00:19:59.902 00:19:59.902 Run Summary: Type Total Ran Passed Failed Inactive 00:19:59.902 suites 1 1 n/a 0 0 00:19:59.902 tests 23 23 23 0 0 00:19:59.902 asserts 130 130 130 0 n/a 00:19:59.902 00:19:59.902 Elapsed time = 0.598 seconds 00:19:59.902 0 00:19:59.902 03:22:43 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90418 00:19:59.902 
03:22:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@950 -- # '[' -z 90418 ']' 00:19:59.902 03:22:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # kill -0 90418 00:19:59.902 03:22:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # uname 00:19:59.902 03:22:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:59.902 03:22:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90418 00:19:59.902 03:22:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:59.902 03:22:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:59.902 killing process with pid 90418 00:19:59.902 03:22:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90418' 00:19:59.902 03:22:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@969 -- # kill 90418 00:19:59.902 03:22:43 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@974 -- # wait 90418 00:20:01.827 03:22:44 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:20:01.827 00:20:01.827 real 0m3.159s 00:20:01.827 user 0m7.325s 00:20:01.827 sys 0m0.513s 00:20:01.827 03:22:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:01.827 03:22:44 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:20:01.827 ************************************ 00:20:01.827 END TEST bdev_bounds 00:20:01.827 ************************************ 00:20:01.827 03:22:44 blockdev_raid5f -- bdev/blockdev.sh@761 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:01.827 03:22:44 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:20:01.827 03:22:44 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:01.827 
03:22:44 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:01.827 ************************************ 00:20:01.827 START TEST bdev_nbd 00:20:01.827 ************************************ 00:20:01.827 03:22:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1125 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:20:01.827 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:20:01.827 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:20:01.827 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90483 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90483 /var/tmp/spdk-nbd.sock 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@831 -- # '[' -z 90483 ']' 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:01.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:01.828 03:22:44 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:01.828 [2024-10-09 03:22:44.834178] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:20:01.828 [2024-10-09 03:22:44.834285] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:01.828 [2024-10-09 03:22:44.997789] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:02.088 [2024-10-09 03:22:45.248913] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # return 0 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:02.658 03:22:45 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:20:02.917 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:20:02.917 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:20:02.917 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:20:02.917 03:22:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:02.917 03:22:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:02.917 03:22:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:02.917 03:22:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:02.917 03:22:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:02.918 03:22:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:02.918 03:22:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:02.918 03:22:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:02.918 03:22:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:02.918 1+0 records in 00:20:02.918 1+0 records out 00:20:02.918 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238898 s, 17.1 MB/s 00:20:02.918 03:22:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.918 03:22:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:02.918 03:22:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:02.918 03:22:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 
00:20:02.918 03:22:46 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:02.918 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:20:02.918 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:20:02.918 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:03.178 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:20:03.178 { 00:20:03.178 "nbd_device": "/dev/nbd0", 00:20:03.178 "bdev_name": "raid5f" 00:20:03.178 } 00:20:03.178 ]' 00:20:03.178 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:20:03.178 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:20:03.178 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:20:03.178 { 00:20:03.178 "nbd_device": "/dev/nbd0", 00:20:03.178 "bdev_name": "raid5f" 00:20:03.178 } 00:20:03.178 ]' 00:20:03.178 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:03.178 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:03.178 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:03.178 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:03.178 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:03.178 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:03.178 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:03.437 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:20:03.437 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:03.437 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:03.437 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:03.438 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:03.438 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:03.438 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:03.438 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:03.438 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:03.438 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:03.438 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:03.698 03:22:46 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:20:03.958 /dev/nbd0 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:03.958 03:22:47 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@869 -- # local i 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # break 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:03.958 1+0 records in 00:20:03.958 1+0 records out 00:20:03.958 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000450788 s, 9.1 MB/s 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@886 -- # size=4096 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@887 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # return 0 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:03.958 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:20:04.218 { 00:20:04.218 "nbd_device": "/dev/nbd0", 00:20:04.218 "bdev_name": "raid5f" 00:20:04.218 } 00:20:04.218 ]' 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:20:04.218 { 00:20:04.218 "nbd_device": "/dev/nbd0", 00:20:04.218 "bdev_name": "raid5f" 00:20:04.218 } 00:20:04.218 ]' 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:20:04.218 256+0 records in 00:20:04.218 256+0 records out 00:20:04.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122227 s, 85.8 MB/s 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:20:04.218 256+0 records in 00:20:04.218 256+0 records out 00:20:04.218 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0305675 s, 34.3 MB/s 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:04.218 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:04.219 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:04.219 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:04.219 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:04.478 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:04.478 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:04.478 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:04.478 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:04.478 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:04.478 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:04.478 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:04.478 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:04.478 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:20:04.478 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:04.478 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:20:04.739 03:22:47 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:20:04.999 malloc_lvol_verify 00:20:04.999 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:20:05.258 e8aaaeea-3cc0-44c1-ad6d-78783cbc9d79 00:20:05.258 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:20:05.258 526377e5-36fd-4379-a4a4-cd477240d76c 00:20:05.258 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:20:05.518 /dev/nbd0 00:20:05.518 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:20:05.518 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:20:05.518 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:20:05.518 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:20:05.518 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:20:05.518 mke2fs 1.47.0 (5-Feb-2023) 00:20:05.518 Discarding device blocks: 0/4096 done 00:20:05.518 Creating filesystem with 4096 1k blocks and 1024 inodes 00:20:05.518 00:20:05.519 Allocating group tables: 0/1 done 00:20:05.519 Writing inode tables: 0/1 done 00:20:05.519 Creating journal (1024 blocks): done 00:20:05.519 Writing superblocks and filesystem accounting information: 0/1 done 00:20:05.519 00:20:05.519 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:20:05.519 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:20:05.519 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:05.519 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:05.519 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:20:05.519 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:05.519 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90483 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@950 -- # '[' -z 90483 ']' 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # kill -0 90483 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # uname 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 90483 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:05.779 killing process with pid 90483 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@968 -- # echo 'killing process with pid 90483' 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@969 -- # kill 90483 00:20:05.779 03:22:48 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@974 -- # wait 90483 00:20:07.690 03:22:50 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:20:07.690 00:20:07.690 real 0m5.925s 00:20:07.690 user 0m7.705s 00:20:07.690 sys 0m1.361s 00:20:07.690 03:22:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:07.690 03:22:50 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:20:07.690 ************************************ 00:20:07.690 END TEST bdev_nbd 00:20:07.690 ************************************ 00:20:07.690 03:22:50 blockdev_raid5f -- bdev/blockdev.sh@762 -- # [[ y == y ]] 00:20:07.690 03:22:50 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = nvme ']' 00:20:07.690 03:22:50 blockdev_raid5f -- bdev/blockdev.sh@763 -- # '[' raid5f = gpt ']' 00:20:07.690 03:22:50 blockdev_raid5f -- bdev/blockdev.sh@767 -- # run_test bdev_fio fio_test_suite '' 00:20:07.690 03:22:50 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:07.690 03:22:50 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:07.690 03:22:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:07.690 ************************************ 00:20:07.690 START TEST bdev_fio 00:20:07.690 ************************************ 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1125 -- # fio_test_suite '' 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:20:07.690 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1101 -- # '[' 11 -le 1 ']' 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:07.690 ************************************ 00:20:07.690 START TEST bdev_fio_rw_verify 00:20:07.690 ************************************ 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1125 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=/usr/lib64/libasan.so.8 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1347 -- # break 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:20:07.690 03:22:50 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:20:07.950 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:20:07.950 fio-3.35 00:20:07.950 Starting 1 thread 00:20:20.171 00:20:20.171 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90684: Wed Oct 9 03:23:02 2024 00:20:20.171 read: IOPS=12.1k, BW=47.4MiB/s (49.7MB/s)(474MiB/10001msec) 00:20:20.171 slat (nsec): min=16734, max=66406, avg=19003.66, stdev=2259.42 00:20:20.171 clat (usec): min=12, max=307, avg=132.31, stdev=45.23 00:20:20.171 lat (usec): min=31, max=325, avg=151.31, stdev=45.51 00:20:20.171 clat percentiles (usec): 00:20:20.171 | 50.000th=[ 133], 99.000th=[ 219], 99.900th=[ 245], 99.990th=[ 269], 00:20:20.171 | 99.999th=[ 289] 00:20:20.171 write: IOPS=12.8k, BW=49.8MiB/s (52.2MB/s)(492MiB/9875msec); 0 zone resets 00:20:20.171 slat (usec): min=7, max=237, avg=16.49, stdev= 3.86 00:20:20.171 clat (usec): min=62, max=1714, avg=305.44, stdev=42.56 00:20:20.171 lat (usec): min=77, max=1951, avg=321.92, stdev=43.64 00:20:20.171 clat percentiles (usec): 00:20:20.171 | 50.000th=[ 306], 99.000th=[ 392], 99.900th=[ 578], 99.990th=[ 1057], 00:20:20.171 | 99.999th=[ 1598] 00:20:20.171 bw ( KiB/s): min=47400, max=53784, per=98.82%, avg=50421.89, stdev=1545.89, samples=19 00:20:20.171 iops : min=11850, max=13446, avg=12605.47, stdev=386.47, samples=19 00:20:20.171 lat (usec) : 20=0.01%, 50=0.01%, 
100=14.38%, 250=39.45%, 500=46.10% 00:20:20.171 lat (usec) : 750=0.04%, 1000=0.02% 00:20:20.171 lat (msec) : 2=0.01% 00:20:20.171 cpu : usr=98.74%, sys=0.55%, ctx=20, majf=0, minf=9985 00:20:20.171 IO depths : 1=7.6%, 2=19.8%, 4=55.2%, 8=17.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:20:20.171 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.171 complete : 0=0.0%, 4=90.0%, 8=10.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:20.171 issued rwts: total=121355,125961,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:20.171 latency : target=0, window=0, percentile=100.00%, depth=8 00:20:20.171 00:20:20.171 Run status group 0 (all jobs): 00:20:20.171 READ: bw=47.4MiB/s (49.7MB/s), 47.4MiB/s-47.4MiB/s (49.7MB/s-49.7MB/s), io=474MiB (497MB), run=10001-10001msec 00:20:20.171 WRITE: bw=49.8MiB/s (52.2MB/s), 49.8MiB/s-49.8MiB/s (52.2MB/s-52.2MB/s), io=492MiB (516MB), run=9875-9875msec 00:20:20.431 ----------------------------------------------------- 00:20:20.431 Suppressions used: 00:20:20.431 count bytes template 00:20:20.431 1 7 /usr/src/fio/parse.c 00:20:20.431 833 79968 /usr/src/fio/iolog.c 00:20:20.431 1 8 libtcmalloc_minimal.so 00:20:20.431 1 904 libcrypto.so 00:20:20.431 ----------------------------------------------------- 00:20:20.431 00:20:20.692 00:20:20.692 real 0m12.854s 00:20:20.692 user 0m12.943s 00:20:20.692 sys 0m0.929s 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:20:20.692 ************************************ 00:20:20.692 END TEST bdev_fio_rw_verify 00:20:20.692 ************************************ 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "2c39a741-0b02-41d4-8b5a-ee67c94ce337"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": 
"2c39a741-0b02-41d4-8b5a-ee67c94ce337",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "2c39a741-0b02-41d4-8b5a-ee67c94ce337",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "848733f7-ea55-4698-8050-d693511e0668",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "bd8482ba-f31e-4dc8-937b-95bf5f2a9aea",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "ecb2f7a6-ce0f-476f-aed8-c8a0f6ab03a1",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:20:20.692 /home/vagrant/spdk_repo/spdk 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # 
return 0 00:20:20.692 00:20:20.692 real 0m13.144s 00:20:20.692 user 0m13.070s 00:20:20.692 sys 0m1.060s 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:20.692 03:23:03 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:20:20.692 ************************************ 00:20:20.692 END TEST bdev_fio 00:20:20.692 ************************************ 00:20:20.692 03:23:03 blockdev_raid5f -- bdev/blockdev.sh@774 -- # trap cleanup SIGINT SIGTERM EXIT 00:20:20.692 03:23:03 blockdev_raid5f -- bdev/blockdev.sh@776 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:20.692 03:23:03 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:20:20.692 03:23:03 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:20.692 03:23:03 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:20.692 ************************************ 00:20:20.692 START TEST bdev_verify 00:20:20.692 ************************************ 00:20:20.692 03:23:03 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:20:20.952 [2024-10-09 03:23:04.036059] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 
00:20:20.952 [2024-10-09 03:23:04.036166] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90848 ] 00:20:20.952 [2024-10-09 03:23:04.202412] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:21.212 [2024-10-09 03:23:04.449549] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:21.212 [2024-10-09 03:23:04.449578] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.781 Running I/O for 5 seconds... 00:20:24.101 10949.00 IOPS, 42.77 MiB/s [2024-10-09T03:23:08.372Z] 11064.00 IOPS, 43.22 MiB/s [2024-10-09T03:23:09.310Z] 11070.00 IOPS, 43.24 MiB/s [2024-10-09T03:23:10.248Z] 11044.75 IOPS, 43.14 MiB/s [2024-10-09T03:23:10.248Z] 11073.20 IOPS, 43.25 MiB/s 00:20:26.945 Latency(us) 00:20:26.945 [2024-10-09T03:23:10.248Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:26.945 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:20:26.945 Verification LBA range: start 0x0 length 0x2000 00:20:26.945 raid5f : 5.01 6719.96 26.25 0.00 0.00 28680.07 1631.25 20948.63 00:20:26.945 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:26.945 Verification LBA range: start 0x2000 length 0x2000 00:20:26.945 raid5f : 5.02 4343.48 16.97 0.00 0.00 44249.49 142.20 33884.12 00:20:26.945 [2024-10-09T03:23:10.248Z] =================================================================================================================== 00:20:26.945 [2024-10-09T03:23:10.248Z] Total : 11063.44 43.22 0.00 0.00 34799.44 142.20 33884.12 00:20:28.854 00:20:28.854 real 0m7.734s 00:20:28.854 user 0m13.922s 00:20:28.854 sys 0m0.404s 00:20:28.854 03:23:11 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:28.854 03:23:11 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:20:28.854 ************************************ 00:20:28.854 END TEST bdev_verify 00:20:28.854 ************************************ 00:20:28.854 03:23:11 blockdev_raid5f -- bdev/blockdev.sh@777 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:28.854 03:23:11 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 16 -le 1 ']' 00:20:28.854 03:23:11 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:28.854 03:23:11 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:28.854 ************************************ 00:20:28.854 START TEST bdev_verify_big_io 00:20:28.854 ************************************ 00:20:28.854 03:23:11 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:20:28.854 [2024-10-09 03:23:11.848441] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:20:28.854 [2024-10-09 03:23:11.848560] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90946 ] 00:20:28.854 [2024-10-09 03:23:12.015547] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 2 00:20:29.113 [2024-10-09 03:23:12.259245] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:29.113 [2024-10-09 03:23:12.259279] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.683 Running I/O for 5 seconds... 
00:20:31.634 633.00 IOPS, 39.56 MiB/s [2024-10-09T03:23:16.318Z] 761.00 IOPS, 47.56 MiB/s [2024-10-09T03:23:17.257Z] 803.00 IOPS, 50.19 MiB/s [2024-10-09T03:23:18.196Z] 808.75 IOPS, 50.55 MiB/s [2024-10-09T03:23:18.196Z] 812.00 IOPS, 50.75 MiB/s 00:20:34.893 Latency(us) 00:20:34.893 [2024-10-09T03:23:18.196Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:34.893 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:20:34.893 Verification LBA range: start 0x0 length 0x200 00:20:34.893 raid5f : 5.19 464.74 29.05 0.00 0.00 6916811.30 183.34 298546.53 00:20:34.893 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:20:34.893 Verification LBA range: start 0x200 length 0x200 00:20:34.893 raid5f : 5.26 350.13 21.88 0.00 0.00 8987765.12 185.12 380967.35 00:20:34.893 [2024-10-09T03:23:18.196Z] =================================================================================================================== 00:20:34.893 [2024-10-09T03:23:18.196Z] Total : 814.87 50.93 0.00 0.00 7812780.03 183.34 380967.35 00:20:36.801 00:20:36.801 real 0m7.974s 00:20:36.801 user 0m14.426s 00:20:36.801 sys 0m0.382s 00:20:36.801 03:23:19 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:36.801 03:23:19 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:20:36.801 ************************************ 00:20:36.801 END TEST bdev_verify_big_io 00:20:36.801 ************************************ 00:20:36.802 03:23:19 blockdev_raid5f -- bdev/blockdev.sh@778 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:36.802 03:23:19 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:36.802 03:23:19 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:36.802 03:23:19 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:36.802 ************************************ 00:20:36.802 START TEST bdev_write_zeroes 00:20:36.802 ************************************ 00:20:36.802 03:23:19 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:36.802 [2024-10-09 03:23:19.889778] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:20:36.802 [2024-10-09 03:23:19.889913] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91050 ] 00:20:36.802 [2024-10-09 03:23:20.052060] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.061 [2024-10-09 03:23:20.297354] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.630 Running I/O for 1 seconds... 
00:20:39.013 29943.00 IOPS, 116.96 MiB/s 00:20:39.013 Latency(us) 00:20:39.013 [2024-10-09T03:23:22.316Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:39.013 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:20:39.013 raid5f : 1.01 29915.42 116.86 0.00 0.00 4265.56 1373.68 5895.38 00:20:39.013 [2024-10-09T03:23:22.316Z] =================================================================================================================== 00:20:39.013 [2024-10-09T03:23:22.316Z] Total : 29915.42 116.86 0.00 0.00 4265.56 1373.68 5895.38 00:20:40.396 00:20:40.396 real 0m3.709s 00:20:40.396 user 0m3.204s 00:20:40.396 sys 0m0.376s 00:20:40.396 03:23:23 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:40.396 03:23:23 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:20:40.396 ************************************ 00:20:40.396 END TEST bdev_write_zeroes 00:20:40.396 ************************************ 00:20:40.396 03:23:23 blockdev_raid5f -- bdev/blockdev.sh@781 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:40.396 03:23:23 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:40.396 03:23:23 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:40.396 03:23:23 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:40.396 ************************************ 00:20:40.396 START TEST bdev_json_nonenclosed 00:20:40.396 ************************************ 00:20:40.396 03:23:23 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:40.396 [2024-10-09 
03:23:23.671099] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:20:40.396 [2024-10-09 03:23:23.671205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91113 ] 00:20:40.656 [2024-10-09 03:23:23.838314] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.916 [2024-10-09 03:23:24.089791] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.916 [2024-10-09 03:23:24.089933] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:20:40.916 [2024-10-09 03:23:24.089956] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:40.916 [2024-10-09 03:23:24.089969] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:41.489 00:20:41.489 real 0m0.935s 00:20:41.489 user 0m0.654s 00:20:41.489 sys 0m0.174s 00:20:41.489 03:23:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:41.489 03:23:24 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:20:41.489 ************************************ 00:20:41.489 END TEST bdev_json_nonenclosed 00:20:41.489 ************************************ 00:20:41.489 03:23:24 blockdev_raid5f -- bdev/blockdev.sh@784 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:41.489 03:23:24 blockdev_raid5f -- common/autotest_common.sh@1101 -- # '[' 13 -le 1 ']' 00:20:41.489 03:23:24 blockdev_raid5f -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:41.489 03:23:24 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:41.489 
************************************ 00:20:41.489 START TEST bdev_json_nonarray 00:20:41.489 ************************************ 00:20:41.489 03:23:24 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1125 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:20:41.489 [2024-10-09 03:23:24.681133] Starting SPDK v25.01-pre git sha1 3c4904078 / DPDK 24.03.0 initialization... 00:20:41.489 [2024-10-09 03:23:24.681262] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91140 ] 00:20:41.748 [2024-10-09 03:23:24.847267] app.c: 917:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.008 [2024-10-09 03:23:25.072993] reactor.c:1001:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.008 [2024-10-09 03:23:25.073379] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:20:42.008 [2024-10-09 03:23:25.073471] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:20:42.008 [2024-10-09 03:23:25.073519] app.c:1062:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:20:42.268 00:20:42.268 real 0m0.904s 00:20:42.268 user 0m0.634s 00:20:42.268 sys 0m0.163s 00:20:42.268 03:23:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:42.268 03:23:25 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:20:42.268 ************************************ 00:20:42.268 END TEST bdev_json_nonarray 00:20:42.268 ************************************ 00:20:42.268 03:23:25 blockdev_raid5f -- bdev/blockdev.sh@786 -- # [[ raid5f == bdev ]] 00:20:42.268 03:23:25 blockdev_raid5f -- bdev/blockdev.sh@793 -- # [[ raid5f == gpt ]] 00:20:42.268 03:23:25 blockdev_raid5f -- bdev/blockdev.sh@797 -- # [[ raid5f == crypto_sw ]] 00:20:42.268 03:23:25 blockdev_raid5f -- bdev/blockdev.sh@809 -- # trap - SIGINT SIGTERM EXIT 00:20:42.268 03:23:25 blockdev_raid5f -- bdev/blockdev.sh@810 -- # cleanup 00:20:42.268 03:23:25 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:20:42.268 03:23:25 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:20:42.268 03:23:25 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:20:42.268 03:23:25 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:20:42.268 03:23:25 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:20:42.268 03:23:25 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:20:42.268 00:20:42.268 real 0m51.709s 00:20:42.268 user 1m7.837s 00:20:42.268 sys 0m6.084s 00:20:42.268 03:23:25 blockdev_raid5f -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:42.529 03:23:25 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:20:42.529 
************************************ 00:20:42.529 END TEST blockdev_raid5f 00:20:42.529 ************************************ 00:20:42.529 03:23:25 -- spdk/autotest.sh@194 -- # uname -s 00:20:42.529 03:23:25 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:20:42.529 03:23:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:42.529 03:23:25 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:20:42.529 03:23:25 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@256 -- # timing_exit lib 00:20:42.529 03:23:25 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:42.529 03:23:25 -- common/autotest_common.sh@10 -- # set +x 00:20:42.529 03:23:25 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@272 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:20:42.529 03:23:25 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:20:42.529 03:23:25 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:20:42.529 03:23:25 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:20:42.529 03:23:25 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:20:42.529 03:23:25 -- spdk/autotest.sh@381 -- # trap - SIGINT 
SIGTERM EXIT 00:20:42.529 03:23:25 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:20:42.529 03:23:25 -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:42.529 03:23:25 -- common/autotest_common.sh@10 -- # set +x 00:20:42.529 03:23:25 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:20:42.529 03:23:25 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:20:42.529 03:23:25 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:20:42.529 03:23:25 -- common/autotest_common.sh@10 -- # set +x 00:20:45.071 INFO: APP EXITING 00:20:45.071 INFO: killing all VMs 00:20:45.071 INFO: killing vhost app 00:20:45.071 INFO: EXIT DONE 00:20:45.331 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:45.331 Waiting for block devices as requested 00:20:45.331 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:20:45.591 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:20:46.531 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:20:46.531 Cleaning 00:20:46.531 Removing: /var/run/dpdk/spdk0/config 00:20:46.531 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:20:46.531 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:20:46.531 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:20:46.531 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:20:46.531 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:20:46.531 Removing: /var/run/dpdk/spdk0/hugepage_info 00:20:46.531 Removing: /dev/shm/spdk_tgt_trace.pid56925 00:20:46.531 Removing: /var/run/dpdk/spdk0 00:20:46.531 Removing: /var/run/dpdk/spdk_pid56690 00:20:46.531 Removing: /var/run/dpdk/spdk_pid56925 00:20:46.531 Removing: /var/run/dpdk/spdk_pid57160 00:20:46.531 Removing: /var/run/dpdk/spdk_pid57270 00:20:46.531 Removing: /var/run/dpdk/spdk_pid57326 00:20:46.531 Removing: /var/run/dpdk/spdk_pid57454 00:20:46.531 Removing: /var/run/dpdk/spdk_pid57483 
00:20:46.531 Removing: /var/run/dpdk/spdk_pid57693 00:20:46.531 Removing: /var/run/dpdk/spdk_pid57806 00:20:46.531 Removing: /var/run/dpdk/spdk_pid57913 00:20:46.531 Removing: /var/run/dpdk/spdk_pid58040 00:20:46.531 Removing: /var/run/dpdk/spdk_pid58148 00:20:46.531 Removing: /var/run/dpdk/spdk_pid58188 00:20:46.531 Removing: /var/run/dpdk/spdk_pid58230 00:20:46.531 Removing: /var/run/dpdk/spdk_pid58306 00:20:46.531 Removing: /var/run/dpdk/spdk_pid58434 00:20:46.531 Removing: /var/run/dpdk/spdk_pid58887 00:20:46.531 Removing: /var/run/dpdk/spdk_pid58962 00:20:46.531 Removing: /var/run/dpdk/spdk_pid59038 00:20:46.531 Removing: /var/run/dpdk/spdk_pid59054 00:20:46.531 Removing: /var/run/dpdk/spdk_pid59205 00:20:46.531 Removing: /var/run/dpdk/spdk_pid59221 00:20:46.531 Removing: /var/run/dpdk/spdk_pid59375 00:20:46.531 Removing: /var/run/dpdk/spdk_pid59391 00:20:46.792 Removing: /var/run/dpdk/spdk_pid59466 00:20:46.792 Removing: /var/run/dpdk/spdk_pid59484 00:20:46.792 Removing: /var/run/dpdk/spdk_pid59559 00:20:46.792 Removing: /var/run/dpdk/spdk_pid59577 00:20:46.792 Removing: /var/run/dpdk/spdk_pid59783 00:20:46.792 Removing: /var/run/dpdk/spdk_pid59820 00:20:46.792 Removing: /var/run/dpdk/spdk_pid59911 00:20:46.792 Removing: /var/run/dpdk/spdk_pid61288 00:20:46.792 Removing: /var/run/dpdk/spdk_pid61505 00:20:46.792 Removing: /var/run/dpdk/spdk_pid61645 00:20:46.792 Removing: /var/run/dpdk/spdk_pid62305 00:20:46.792 Removing: /var/run/dpdk/spdk_pid62511 00:20:46.792 Removing: /var/run/dpdk/spdk_pid62662 00:20:46.792 Removing: /var/run/dpdk/spdk_pid63311 00:20:46.792 Removing: /var/run/dpdk/spdk_pid63641 00:20:46.792 Removing: /var/run/dpdk/spdk_pid63781 00:20:46.792 Removing: /var/run/dpdk/spdk_pid65172 00:20:46.792 Removing: /var/run/dpdk/spdk_pid65431 00:20:46.792 Removing: /var/run/dpdk/spdk_pid65577 00:20:46.792 Removing: /var/run/dpdk/spdk_pid66973 00:20:46.792 Removing: /var/run/dpdk/spdk_pid67226 00:20:46.792 Removing: /var/run/dpdk/spdk_pid67372 
00:20:46.792 Removing: /var/run/dpdk/spdk_pid68768 00:20:46.792 Removing: /var/run/dpdk/spdk_pid69220 00:20:46.792 Removing: /var/run/dpdk/spdk_pid69365 00:20:46.792 Removing: /var/run/dpdk/spdk_pid70869 00:20:46.792 Removing: /var/run/dpdk/spdk_pid71130 00:20:46.792 Removing: /var/run/dpdk/spdk_pid71276 00:20:46.792 Removing: /var/run/dpdk/spdk_pid72789 00:20:46.792 Removing: /var/run/dpdk/spdk_pid73055 00:20:46.792 Removing: /var/run/dpdk/spdk_pid73212 00:20:46.792 Removing: /var/run/dpdk/spdk_pid74719 00:20:46.792 Removing: /var/run/dpdk/spdk_pid75211 00:20:46.792 Removing: /var/run/dpdk/spdk_pid75358 00:20:46.792 Removing: /var/run/dpdk/spdk_pid75536 00:20:46.792 Removing: /var/run/dpdk/spdk_pid75973 00:20:46.792 Removing: /var/run/dpdk/spdk_pid76707 00:20:46.792 Removing: /var/run/dpdk/spdk_pid77103 00:20:46.792 Removing: /var/run/dpdk/spdk_pid77786 00:20:46.792 Removing: /var/run/dpdk/spdk_pid78238 00:20:46.792 Removing: /var/run/dpdk/spdk_pid78997 00:20:46.792 Removing: /var/run/dpdk/spdk_pid79406 00:20:46.792 Removing: /var/run/dpdk/spdk_pid81405 00:20:46.792 Removing: /var/run/dpdk/spdk_pid81849 00:20:46.792 Removing: /var/run/dpdk/spdk_pid82294 00:20:46.792 Removing: /var/run/dpdk/spdk_pid84392 00:20:46.792 Removing: /var/run/dpdk/spdk_pid84878 00:20:46.792 Removing: /var/run/dpdk/spdk_pid85394 00:20:46.792 Removing: /var/run/dpdk/spdk_pid86462 00:20:46.792 Removing: /var/run/dpdk/spdk_pid86788 00:20:46.792 Removing: /var/run/dpdk/spdk_pid87740 00:20:46.792 Removing: /var/run/dpdk/spdk_pid88063 00:20:46.792 Removing: /var/run/dpdk/spdk_pid89003 00:20:46.792 Removing: /var/run/dpdk/spdk_pid89330 00:20:47.052 Removing: /var/run/dpdk/spdk_pid90013 00:20:47.052 Removing: /var/run/dpdk/spdk_pid90293 00:20:47.052 Removing: /var/run/dpdk/spdk_pid90366 00:20:47.052 Removing: /var/run/dpdk/spdk_pid90418 00:20:47.052 Removing: /var/run/dpdk/spdk_pid90669 00:20:47.052 Removing: /var/run/dpdk/spdk_pid90848 00:20:47.052 Removing: /var/run/dpdk/spdk_pid90946 
00:20:47.052 Removing: /var/run/dpdk/spdk_pid91050 00:20:47.052 Removing: /var/run/dpdk/spdk_pid91113 00:20:47.052 Removing: /var/run/dpdk/spdk_pid91140 00:20:47.052 Clean 00:20:47.052 03:23:30 -- common/autotest_common.sh@1451 -- # return 0 00:20:47.052 03:23:30 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:20:47.052 03:23:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:47.052 03:23:30 -- common/autotest_common.sh@10 -- # set +x 00:20:47.052 03:23:30 -- spdk/autotest.sh@387 -- # timing_exit autotest 00:20:47.052 03:23:30 -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:47.052 03:23:30 -- common/autotest_common.sh@10 -- # set +x 00:20:47.052 03:23:30 -- spdk/autotest.sh@388 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:20:47.052 03:23:30 -- spdk/autotest.sh@390 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:20:47.052 03:23:30 -- spdk/autotest.sh@390 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:20:47.052 03:23:30 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:20:47.313 03:23:30 -- spdk/autotest.sh@394 -- # hostname 00:20:47.313 03:23:30 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:20:47.313 geninfo: WARNING: invalid characters removed from testname! 
00:21:09.265 03:23:51 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:11.806 03:23:54 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:13.716 03:23:56 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:15.626 03:23:58 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:18.184 03:24:01 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:20.093 03:24:03 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:21:22.003 03:24:05 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:21:22.263 03:24:05 -- common/autotest_common.sh@1680 -- $ [[ y == y ]] 00:21:22.263 03:24:05 -- common/autotest_common.sh@1681 -- $ lcov --version 00:21:22.263 03:24:05 -- common/autotest_common.sh@1681 -- $ awk '{print $NF}' 00:21:22.263 03:24:05 -- common/autotest_common.sh@1681 -- $ lt 1.15 2 00:21:22.263 03:24:05 -- scripts/common.sh@373 -- $ cmp_versions 1.15 '<' 2 00:21:22.263 03:24:05 -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:21:22.263 03:24:05 -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:21:22.263 03:24:05 -- scripts/common.sh@336 -- $ IFS=.-: 00:21:22.263 03:24:05 -- scripts/common.sh@336 -- $ read -ra ver1 00:21:22.263 03:24:05 -- scripts/common.sh@337 -- $ IFS=.-: 00:21:22.263 03:24:05 -- scripts/common.sh@337 -- $ read -ra ver2 00:21:22.263 03:24:05 -- scripts/common.sh@338 -- $ local 'op=<' 00:21:22.263 03:24:05 -- scripts/common.sh@340 -- $ ver1_l=2 00:21:22.264 03:24:05 -- scripts/common.sh@341 -- $ ver2_l=1 00:21:22.264 03:24:05 -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:21:22.264 03:24:05 -- scripts/common.sh@344 -- $ case "$op" in 00:21:22.264 03:24:05 -- scripts/common.sh@345 -- $ : 1 00:21:22.264 03:24:05 -- scripts/common.sh@364 -- $ (( v = 0 )) 00:21:22.264 03:24:05 -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:22.264 03:24:05 -- scripts/common.sh@365 -- $ decimal 1 00:21:22.264 03:24:05 -- scripts/common.sh@353 -- $ local d=1 00:21:22.264 03:24:05 -- scripts/common.sh@354 -- $ [[ 1 =~ ^[0-9]+$ ]] 00:21:22.264 03:24:05 -- scripts/common.sh@355 -- $ echo 1 00:21:22.264 03:24:05 -- scripts/common.sh@365 -- $ ver1[v]=1 00:21:22.264 03:24:05 -- scripts/common.sh@366 -- $ decimal 2 00:21:22.264 03:24:05 -- scripts/common.sh@353 -- $ local d=2 00:21:22.264 03:24:05 -- scripts/common.sh@354 -- $ [[ 2 =~ ^[0-9]+$ ]] 00:21:22.264 03:24:05 -- scripts/common.sh@355 -- $ echo 2 00:21:22.264 03:24:05 -- scripts/common.sh@366 -- $ ver2[v]=2 00:21:22.264 03:24:05 -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:21:22.264 03:24:05 -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:21:22.264 03:24:05 -- scripts/common.sh@368 -- $ return 0 00:21:22.264 03:24:05 -- common/autotest_common.sh@1682 -- $ lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:22.264 03:24:05 -- common/autotest_common.sh@1694 -- $ export 'LCOV_OPTS= 00:21:22.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.264 --rc genhtml_branch_coverage=1 00:21:22.264 --rc genhtml_function_coverage=1 00:21:22.264 --rc genhtml_legend=1 00:21:22.264 --rc geninfo_all_blocks=1 00:21:22.264 --rc geninfo_unexecuted_blocks=1 00:21:22.264 00:21:22.264 ' 00:21:22.264 03:24:05 -- common/autotest_common.sh@1694 -- $ LCOV_OPTS=' 00:21:22.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.264 --rc genhtml_branch_coverage=1 00:21:22.264 --rc genhtml_function_coverage=1 00:21:22.264 --rc genhtml_legend=1 00:21:22.264 --rc geninfo_all_blocks=1 00:21:22.264 --rc geninfo_unexecuted_blocks=1 00:21:22.264 00:21:22.264 ' 00:21:22.264 03:24:05 -- common/autotest_common.sh@1695 -- $ export 'LCOV=lcov 00:21:22.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.264 --rc genhtml_branch_coverage=1 00:21:22.264 --rc 
genhtml_function_coverage=1 00:21:22.264 --rc genhtml_legend=1 00:21:22.264 --rc geninfo_all_blocks=1 00:21:22.264 --rc geninfo_unexecuted_blocks=1 00:21:22.264 00:21:22.264 ' 00:21:22.264 03:24:05 -- common/autotest_common.sh@1695 -- $ LCOV='lcov 00:21:22.264 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:22.264 --rc genhtml_branch_coverage=1 00:21:22.264 --rc genhtml_function_coverage=1 00:21:22.264 --rc genhtml_legend=1 00:21:22.264 --rc geninfo_all_blocks=1 00:21:22.264 --rc geninfo_unexecuted_blocks=1 00:21:22.264 00:21:22.264 ' 00:21:22.264 03:24:05 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:22.264 03:24:05 -- scripts/common.sh@15 -- $ shopt -s extglob 00:21:22.264 03:24:05 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:21:22.264 03:24:05 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:22.264 03:24:05 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:22.264 03:24:05 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.264 03:24:05 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.264 03:24:05 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.264 03:24:05 -- paths/export.sh@5 -- $ export PATH 00:21:22.264 03:24:05 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:22.264 03:24:05 -- common/autobuild_common.sh@485 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:21:22.264 03:24:05 -- common/autobuild_common.sh@486 -- $ date +%s 00:21:22.264 03:24:05 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1728444245.XXXXXX 00:21:22.264 03:24:05 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1728444245.BvZSz2 00:21:22.264 03:24:05 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:21:22.264 03:24:05 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:21:22.264 03:24:05 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:21:22.264 03:24:05 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:21:22.264 03:24:05 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:21:22.264 03:24:05 -- common/autobuild_common.sh@502 -- $ 
get_config_params 00:21:22.264 03:24:05 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:21:22.264 03:24:05 -- common/autotest_common.sh@10 -- $ set +x 00:21:22.264 03:24:05 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:21:22.264 03:24:05 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:21:22.264 03:24:05 -- pm/common@17 -- $ local monitor 00:21:22.264 03:24:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:22.264 03:24:05 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:22.264 03:24:05 -- pm/common@25 -- $ sleep 1 00:21:22.264 03:24:05 -- pm/common@21 -- $ date +%s 00:21:22.264 03:24:05 -- pm/common@21 -- $ date +%s 00:21:22.264 03:24:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728444245 00:21:22.264 03:24:05 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1728444245 00:21:22.264 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728444245_collect-vmstat.pm.log 00:21:22.264 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1728444245_collect-cpu-load.pm.log 00:21:23.202 03:24:06 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:21:23.202 03:24:06 -- spdk/autopackage.sh@10 -- $ [[ 0 -eq 1 ]] 00:21:23.202 03:24:06 -- spdk/autopackage.sh@14 -- $ timing_finish 00:21:23.202 03:24:06 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:21:23.202 03:24:06 -- common/autotest_common.sh@737 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:21:23.202 
03:24:06 -- common/autotest_common.sh@740 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:21:23.462 03:24:06 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:21:23.462 03:24:06 -- pm/common@29 -- $ signal_monitor_resources TERM 00:21:23.462 03:24:06 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:21:23.462 03:24:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:23.462 03:24:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:21:23.462 03:24:06 -- pm/common@44 -- $ pid=92666 00:21:23.462 03:24:06 -- pm/common@50 -- $ kill -TERM 92666 00:21:23.462 03:24:06 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:21:23.462 03:24:06 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:21:23.462 03:24:06 -- pm/common@44 -- $ pid=92668 00:21:23.462 03:24:06 -- pm/common@50 -- $ kill -TERM 92668 00:21:23.462 + [[ -n 5420 ]] 00:21:23.462 + sudo kill 5420 00:21:23.471 [Pipeline] } 00:21:23.486 [Pipeline] // timeout 00:21:23.491 [Pipeline] } 00:21:23.504 [Pipeline] // stage 00:21:23.509 [Pipeline] } 00:21:23.522 [Pipeline] // catchError 00:21:23.530 [Pipeline] stage 00:21:23.532 [Pipeline] { (Stop VM) 00:21:23.544 [Pipeline] sh 00:21:23.827 + vagrant halt 00:21:26.364 ==> default: Halting domain... 00:21:34.511 [Pipeline] sh 00:21:34.794 + vagrant destroy -f 00:21:37.335 ==> default: Removing domain... 
00:21:37.347 [Pipeline] sh 00:21:37.661 + mv output /var/jenkins/workspace/raid-vg-autotest_2/output 00:21:37.672 [Pipeline] } 00:21:37.687 [Pipeline] // stage 00:21:37.692 [Pipeline] } 00:21:37.706 [Pipeline] // dir 00:21:37.712 [Pipeline] } 00:21:37.726 [Pipeline] // wrap 00:21:37.733 [Pipeline] } 00:21:37.746 [Pipeline] // catchError 00:21:37.755 [Pipeline] stage 00:21:37.758 [Pipeline] { (Epilogue) 00:21:37.770 [Pipeline] sh 00:21:38.056 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:21:42.266 [Pipeline] catchError 00:21:42.268 [Pipeline] { 00:21:42.282 [Pipeline] sh 00:21:42.568 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:21:42.568 Artifacts sizes are good 00:21:42.577 [Pipeline] } 00:21:42.591 [Pipeline] // catchError 00:21:42.602 [Pipeline] archiveArtifacts 00:21:42.610 Archiving artifacts 00:21:42.717 [Pipeline] cleanWs 00:21:42.729 [WS-CLEANUP] Deleting project workspace... 00:21:42.729 [WS-CLEANUP] Deferred wipeout is used... 00:21:42.736 [WS-CLEANUP] done 00:21:42.738 [Pipeline] } 00:21:42.753 [Pipeline] // stage 00:21:42.758 [Pipeline] } 00:21:42.785 [Pipeline] // node 00:21:42.802 [Pipeline] End of Pipeline 00:21:42.857 Finished: SUCCESS